id int64 599M 3.29B | url stringlengths 58 61 | html_url stringlengths 46 51 | number int64 1 7.72k | title stringlengths 1 290 | state stringclasses 2 values | comments int64 0 70 | created_at timestamp[s]date 2020-04-14 10:18:02 2025-08-05 09:28:51 | updated_at timestamp[s]date 2020-04-27 16:04:17 2025-08-05 11:39:56 | closed_at timestamp[s]date 2020-04-14 12:01:40 2025-08-01 05:15:45 ⌀ | user_login stringlengths 3 26 | labels listlengths 0 4 | body stringlengths 0 228k ⌀ | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,367,890,622 | https://api.github.com/repos/huggingface/datasets/issues/6992 | https://github.com/huggingface/datasets/issues/6992 | 6,992 | Dataset with streaming doesn't work with proxy | open | 1 | 2024-06-22T16:12:08 | 2024-06-25T15:43:05 | null | YHL04 | [] | ### Describe the bug
I'm currently trying to stream data with `datasets` since the dataset is too big, but it hangs indefinitely without loading the first batch. I use AIMOS, which is a supercomputer that uses a proxy to connect to the internet. I assume it has to do with the network configuration. I've already set up both... | false |
2,367,711,094 | https://api.github.com/repos/huggingface/datasets/issues/6991 | https://github.com/huggingface/datasets/pull/6991 | 6,991 | Unblock NumPy 2.0 | closed | 21 | 2024-06-22T09:19:53 | 2024-12-25T17:57:34 | 2024-07-12T12:04:53 | NeilGirdhar | [] | Fixes https://github.com/huggingface/datasets/issues/6980 | true |
2,366,660,785 | https://api.github.com/repos/huggingface/datasets/issues/6990 | https://github.com/huggingface/datasets/issues/6990 | 6,990 | Problematic rank after calling `split_dataset_by_node` twice | closed | 1 | 2024-06-21T14:25:26 | 2024-06-25T16:19:19 | 2024-06-25T16:19:19 | yzhangcs | [] | ### Describe the bug
I'm trying to split an `IterableDataset` with `split_dataset_by_node`.
But when splitting an already split dataset, the resulting `rank` is greater than `world_size`.
### Steps to reproduce the bug
Here is the minimal code for reproduction:
```py
>>> from datasets import load_dataset
>>... | false |
2,365,556,449 | https://api.github.com/repos/huggingface/datasets/issues/6989 | https://github.com/huggingface/datasets/issues/6989 | 6,989 | cache in nfs error | open | 1 | 2024-06-21T02:09:22 | 2025-01-29T11:44:04 | null | simplew2011 | [] | ### Describe the bug
- When reading a dataset, a cache is generated in the ~/.cache/huggingface/datasets directory
- When using .map and .filter operations, a runtime cache is generated in the /tmp/hf_datasets-* directory
- The default is to use the path of tempfile.tempdir
- If I modify this path to the N... | false |
2,364,129,918 | https://api.github.com/repos/huggingface/datasets/issues/6988 | https://github.com/huggingface/datasets/pull/6988 | 6,988 | [`feat`] Move dataset card creation to method for easier overriding | open | 6 | 2024-06-20T10:47:57 | 2024-06-21T16:04:58 | null | tomaarsen | [] | Hello!
## Pull Request overview
* Move dataset card creation to method for easier overriding
## Details
It's common for me to fully automatically download, reformat, and upload a dataset (e.g. see https://huggingface.co/datasets?other=sentence-transformers), but one aspect that I cannot easily automate is the d... | true |
2,363,728,190 | https://api.github.com/repos/huggingface/datasets/issues/6987 | https://github.com/huggingface/datasets/pull/6987 | 6,987 | Remove beam | closed | 2 | 2024-06-20T07:27:14 | 2024-06-26T19:41:55 | 2024-06-26T19:35:42 | albertvillanova | [] | Remove beam, as part of the 3.0 release. | true |
2,362,584,179 | https://api.github.com/repos/huggingface/datasets/issues/6986 | https://github.com/huggingface/datasets/pull/6986 | 6,986 | Add large_list type support in string_to_arrow | closed | 1 | 2024-06-19T14:54:25 | 2024-08-12T14:43:48 | 2024-08-12T14:43:47 | arthasking123 | [] | add large_list type support in string_to_arrow() and _arrow_to_datasets_dtype() in features.py
Fix #6984
| true |
2,362,378,276 | https://api.github.com/repos/huggingface/datasets/issues/6985 | https://github.com/huggingface/datasets/issues/6985 | 6,985 | AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' | closed | 14 | 2024-06-19T13:22:28 | 2025-03-14T18:47:53 | 2024-06-25T05:40:51 | firmai | [] | ### Describe the bug
I have been struggling with this for two days, any help would be appreciated. Python 3.10
```
from setfit import SetFitModel
from huggingface_hub import login
access_token_read = "cccxxxccc"
# Authenticate with the Hugging Face Hub
login(token=access_token_read)
# Load the models fr... | false |
2,362,143,554 | https://api.github.com/repos/huggingface/datasets/issues/6984 | https://github.com/huggingface/datasets/issues/6984 | 6,984 | Convert polars DataFrame back to datasets | closed | 1 | 2024-06-19T11:38:48 | 2024-08-12T14:43:46 | 2024-08-12T14:43:46 | ljw20180420 | [
"enhancement"
] | ### Feature request
This returns error.
```python
from datasets import Dataset
dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})
Dataset.from_polars(dsdf.to_polars())
```
ValueError: Arrow type large_list<item: int64> does not have a datasets dtype equivalent.
### Motivation
When datasets... | false |
2,361,806,201 | https://api.github.com/repos/huggingface/datasets/issues/6983 | https://github.com/huggingface/datasets/pull/6983 | 6,983 | Remove metrics | closed | 2 | 2024-06-19T09:08:55 | 2024-06-28T06:57:38 | 2024-06-28T06:51:30 | albertvillanova | [] | Remove all metrics, as part of the 3.0 release.
Note that they have been deprecated since version 2.5.0. | true |
2,361,661,469 | https://api.github.com/repos/huggingface/datasets/issues/6982 | https://github.com/huggingface/datasets/issues/6982 | 6,982 | cannot split dataset when using load_dataset | closed | 3 | 2024-06-19T08:07:16 | 2024-07-08T06:20:16 | 2024-07-08T06:20:16 | cybest0608 | [] | ### Describe the bug
when I use the load_dataset method to load mozilla-foundation/common_voice_7_0, it successfully downloads and extracts the dataset, but it cannot generate the arrow document.
This bug happens on my server and my laptop, as in #6906, but it doesn't happen in Google Colab. I work for it for da... | false |
2,361,520,022 | https://api.github.com/repos/huggingface/datasets/issues/6981 | https://github.com/huggingface/datasets/pull/6981 | 6,981 | Update docs on trust_remote_code defaults to False | closed | 2 | 2024-06-19T07:12:21 | 2024-06-19T14:32:59 | 2024-06-19T14:26:37 | albertvillanova | [] | Update docs on trust_remote_code defaults to False.
The docs needed to be updated due to this PR:
- #6954 | true |
2,360,909,930 | https://api.github.com/repos/huggingface/datasets/issues/6980 | https://github.com/huggingface/datasets/issues/6980 | 6,980 | Support NumPy 2.0 | closed | 0 | 2024-06-18T23:30:22 | 2024-07-12T12:04:54 | 2024-07-12T12:04:53 | NeilGirdhar | [
"enhancement"
] | ### Feature request
Support NumPy 2.0.
### Motivation
NumPy introduces the Array API, which bridges the gap between machine learning libraries. Many clients of HuggingFace are eager to start using the Array API.
Besides that, NumPy 2 provides a cleaner interface than NumPy 1.
### Tasks
NumPy 2.0 was ... | false |
2,360,175,363 | https://api.github.com/repos/huggingface/datasets/issues/6979 | https://github.com/huggingface/datasets/issues/6979 | 6,979 | How can I load partial parquet files only? | closed | 12 | 2024-06-18T15:44:16 | 2024-06-21T17:09:32 | 2024-06-21T13:32:50 | lucasjinreal | [] | I have a HUGE dataset about 14TB, I unable to download all parquet all. I just take about 100 from it.
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
How can I use just files 000 to 100 out of the 00314, instead of all of them?
I searched the whole net and didn't find a solution, **this is stupid if the... | false |
2,359,511,469 | https://api.github.com/repos/huggingface/datasets/issues/6978 | https://github.com/huggingface/datasets/pull/6978 | 6,978 | Fix regression for pandas < 2.0.0 in JSON loader | closed | 3 | 2024-06-18T10:26:34 | 2024-06-19T06:23:24 | 2024-06-19T05:50:18 | albertvillanova | [] | A regression was introduced for pandas < 2.0.0 in PR:
- #6914
As described in pandas docs, the `dtype_backend` parameter was first added in pandas 2.0.0: https://pandas.pydata.org/docs/reference/api/pandas.read_json.html
This PR fixes the regression by passing (or not) the `dtype_backend` parameter depending on ... | true |
2,359,295,045 | https://api.github.com/repos/huggingface/datasets/issues/6977 | https://github.com/huggingface/datasets/issues/6977 | 6,977 | load json file error with v2.20.0 | closed | 2 | 2024-06-18T08:41:01 | 2024-06-18T10:06:10 | 2024-06-18T10:06:09 | xiaoyaolangzhi | [] | ### Describe the bug
```
load_dataset(path="json", data_files="./test.json")
```
```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables
pa_table = p... | false |
2,357,107,203 | https://api.github.com/repos/huggingface/datasets/issues/6976 | https://github.com/huggingface/datasets/pull/6976 | 6,976 | Ensure compatibility with numpy 2.0.0 | closed | 2 | 2024-06-17T11:29:22 | 2024-06-19T14:30:32 | 2024-06-19T14:04:34 | KennethEnevoldsen | [] | Following the conversion guide, copy=False is no longer required and will result in an error: https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword.
The following fix should resolve the issue.
The error was found during testing on the MTEB repository, e.g. [here](https://github.c... | true |
2,357,003,959 | https://api.github.com/repos/huggingface/datasets/issues/6975 | https://github.com/huggingface/datasets/pull/6975 | 6,975 | Set temporary numpy upper version < 2.0.0 to fix CI | closed | 2 | 2024-06-17T10:36:54 | 2024-06-17T12:49:53 | 2024-06-17T12:43:56 | albertvillanova | [] | Set temporary numpy upper version < 2.0.0 to fix CI. See: https://github.com/huggingface/datasets/actions/runs/9546031216/job/26308072017
```
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.... | true |
2,355,517,362 | https://api.github.com/repos/huggingface/datasets/issues/6973 | https://github.com/huggingface/datasets/issues/6973 | 6,973 | IndexError during training with Squad dataset and T5-small model | closed | 2 | 2024-06-16T07:53:54 | 2024-07-01T11:25:40 | 2024-07-01T11:25:40 | ramtunguturi36 | [] | ### Describe the bug
I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.
### Steps to reproduce the bug
1.Install the required libr... | false |
2,353,531,912 | https://api.github.com/repos/huggingface/datasets/issues/6972 | https://github.com/huggingface/datasets/pull/6972 | 6,972 | Fix webdataset pickling | closed | 2 | 2024-06-14T14:43:02 | 2024-06-14T15:43:43 | 2024-06-14T15:37:35 | lhoestq | [] | ...by making tracked iterables picklable.
This is important to make streaming datasets compatible with multiprocessing e.g. for parallel data loading | true |
2,351,830,856 | https://api.github.com/repos/huggingface/datasets/issues/6971 | https://github.com/huggingface/datasets/pull/6971 | 6,971 | packaging: Remove useless dependencies | closed | 4 | 2024-06-13T18:43:43 | 2024-06-14T14:03:34 | 2024-06-14T13:57:24 | daskol | [] | Revert changes in #6396 and #6404. CVE-2023-47248 has been fixed since PyArrow v14.0.1. Meanwhile Python requirements requires `pyarrow>=15.0.0`. | true |
2,351,380,029 | https://api.github.com/repos/huggingface/datasets/issues/6970 | https://github.com/huggingface/datasets/pull/6970 | 6,970 | Set dev version | closed | 2 | 2024-06-13T14:59:45 | 2024-06-13T15:06:18 | 2024-06-13T14:59:56 | albertvillanova | [] | null | true |
2,351,351,436 | https://api.github.com/repos/huggingface/datasets/issues/6969 | https://github.com/huggingface/datasets/pull/6969 | 6,969 | Release: 2.20.0 | closed | 2 | 2024-06-13T14:48:20 | 2024-06-13T15:04:39 | 2024-06-13T14:55:53 | albertvillanova | [] | null | true |
2,351,331,417 | https://api.github.com/repos/huggingface/datasets/issues/6968 | https://github.com/huggingface/datasets/pull/6968 | 6,968 | Use `HF_HUB_OFFLINE` instead of `HF_DATASETS_OFFLINE` | closed | 3 | 2024-06-13T14:39:40 | 2024-06-13T17:31:37 | 2024-06-13T17:25:37 | Wauplin | [] | To use `datasets` offline, one can use the `HF_DATASETS_OFFLINE` environment variable. This PR makes `HF_HUB_OFFLINE` the recommended environment variable for offline training. Goal is to be more consistent with the rest of HF ecosystem and have a single config value to set.
The changes are backward-compatible meani... | true |
2,349,146,398 | https://api.github.com/repos/huggingface/datasets/issues/6967 | https://github.com/huggingface/datasets/issues/6967 | 6,967 | Method to load Laion400m | open | 0 | 2024-06-12T16:04:04 | 2024-06-12T16:04:04 | null | humanely | [
"enhancement"
] | ### Feature request
Large datasets like Laion400m are provided as embeddings. The provided methods in load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy ; XX = 0 to 99
### Motivation
The trial and experimentation is the key pivot of HF. It would be great if HF can load embeddings... | false |
2,348,934,466 | https://api.github.com/repos/huggingface/datasets/issues/6966 | https://github.com/huggingface/datasets/pull/6966 | 6,966 | Remove underlines between badges | closed | 1 | 2024-06-12T14:32:11 | 2024-06-19T14:16:21 | 2024-06-19T14:10:11 | andrewhong04 | [] | ## Before:
<img width="935" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/93666e72-059b-4180-9e1d-ff176a3d9dac">
## After:
<img width="956" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/75df7c3e-f473-44f0-a872-eeecf6a85fe2">
| true |
2,348,653,895 | https://api.github.com/repos/huggingface/datasets/issues/6965 | https://github.com/huggingface/datasets/pull/6965 | 6,965 | Improve skip take shuffling and distributed | closed | 2 | 2024-06-12T12:30:27 | 2024-06-24T15:22:21 | 2024-06-24T15:16:16 | lhoestq | [] | set the right behavior of skip/take depending on whether it's called after or before shuffle/split_by_node | true |
2,344,973,229 | https://api.github.com/repos/huggingface/datasets/issues/6964 | https://github.com/huggingface/datasets/pull/6964 | 6,964 | Fix resuming arrow format | closed | 2 | 2024-06-10T22:40:33 | 2024-06-14T15:04:49 | 2024-06-14T14:58:37 | lhoestq | [] | following https://github.com/huggingface/datasets/pull/6658 | true |
2,344,269,477 | https://api.github.com/repos/huggingface/datasets/issues/6963 | https://github.com/huggingface/datasets/pull/6963 | 6,963 | [Streaming] retry on requests errors | closed | 3 | 2024-06-10T15:51:56 | 2024-06-28T09:53:11 | 2024-06-28T09:46:52 | lhoestq | [] | reported in https://discuss.huggingface.co/t/speeding-up-streaming-of-large-datasets-fineweb/90714/6 when training using a streaming a dataloader
cc @Wauplin it looks like the retries from `hfh` are not always enough. In this PR I let `datasets` do additional retries (that users can configure in `datasets.config`) ... | true |
2,343,394,378 | https://api.github.com/repos/huggingface/datasets/issues/6962 | https://github.com/huggingface/datasets/pull/6962 | 6,962 | fix(ci): remove unnecessary permissions | closed | 2 | 2024-06-10T09:28:02 | 2024-06-11T08:31:52 | 2024-06-11T08:25:47 | McPatate | [] | ### What does this PR do?
Remove unnecessary permissions granted to the actions workflow.
Sorry for the mishap. | true |
2,342,022,418 | https://api.github.com/repos/huggingface/datasets/issues/6961 | https://github.com/huggingface/datasets/issues/6961 | 6,961 | Manual downloads should count as downloads | open | 1 | 2024-06-09T04:52:06 | 2024-06-13T16:05:00 | null | umarbutler | [
"enhancement"
] | ### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
Th... | false |
2,340,791,685 | https://api.github.com/repos/huggingface/datasets/issues/6960 | https://github.com/huggingface/datasets/pull/6960 | 6,960 | feat(ci): add trufflehog secrets detection | closed | 3 | 2024-06-07T16:18:23 | 2024-06-08T14:58:27 | 2024-06-08T14:52:18 | McPatate | [] | ### What does this PR do?
Adding a GH action to scan for leaked secrets on each commit.
| true |
2,340,229,908 | https://api.github.com/repos/huggingface/datasets/issues/6959 | https://github.com/huggingface/datasets/pull/6959 | 6,959 | Better error handling in `dataset_module_factory` | closed | 3 | 2024-06-07T11:24:15 | 2024-06-10T07:33:53 | 2024-06-10T07:27:43 | Wauplin | [] | cc @cakiki who reported it on [slack](https://huggingface.slack.com/archives/C039P47V1L5/p1717754405578539) (private link)
This PR updates how errors are handled in `dataset_module_factory` when the `dataset_info` cannot be accessed:
1. Use multiple `except ... as e` instead of using `isinstance(e, ...)`
2. Alway... | true |
2,337,476,383 | https://api.github.com/repos/huggingface/datasets/issues/6958 | https://github.com/huggingface/datasets/issues/6958 | 6,958 | My Private Dataset doesn't exist on the Hub or cannot be accessed | closed | 8 | 2024-06-06T06:52:19 | 2024-07-01T11:27:46 | 2024-07-01T11:27:46 | wangguan1995 | [] | ### Describe the bug
```
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on t... | false |
2,335,559,400 | https://api.github.com/repos/huggingface/datasets/issues/6957 | https://github.com/huggingface/datasets/pull/6957 | 6,957 | Fix typos in docs | closed | 2 | 2024-06-05T10:46:47 | 2024-06-05T13:01:07 | 2024-06-05T12:43:26 | albertvillanova | [] | Fix typos in docs introduced by:
- #6956
Typos:
- `comparisions` => `comparisons`
- two consecutive sentences both ending in colon
- split one sentence into two
Sorry, I did not have time to review that PR.
CC: @lhoestq | true |
2,333,940,021 | https://api.github.com/repos/huggingface/datasets/issues/6956 | https://github.com/huggingface/datasets/pull/6956 | 6,956 | update docs on N-dim arrays | closed | 2 | 2024-06-04T16:32:19 | 2024-06-04T16:46:34 | 2024-06-04T16:40:27 | lhoestq | [] | null | true |
2,333,802,815 | https://api.github.com/repos/huggingface/datasets/issues/6955 | https://github.com/huggingface/datasets/pull/6955 | 6,955 | Fix small typo | closed | 1 | 2024-06-04T15:19:02 | 2024-06-05T10:18:56 | 2024-06-04T15:20:55 | marcenacp | [] | null | true |
2,333,530,558 | https://api.github.com/repos/huggingface/datasets/issues/6954 | https://github.com/huggingface/datasets/pull/6954 | 6,954 | Remove default `trust_remote_code=True` | closed | 6 | 2024-06-04T13:22:56 | 2024-06-17T16:32:24 | 2024-06-07T12:20:29 | lhoestq | [] | TODO:
- [x] fix tests | true |
2,333,366,120 | https://api.github.com/repos/huggingface/datasets/issues/6953 | https://github.com/huggingface/datasets/issues/6953 | 6,953 | Remove canonical datasets from docs | closed | 1 | 2024-06-04T12:09:03 | 2024-07-01T11:31:25 | 2024-07-01T11:31:25 | albertvillanova | [
"documentation"
] | Remove canonical datasets from docs, now that we no longer have canonical datasets. | false |
2,333,320,411 | https://api.github.com/repos/huggingface/datasets/issues/6952 | https://github.com/huggingface/datasets/pull/6952 | 6,952 | Move info_utils errors to exceptions module | closed | 2 | 2024-06-04T11:48:32 | 2024-06-10T14:09:59 | 2024-06-10T14:03:55 | albertvillanova | [] | Move `info_utils` errors to `exceptions` module.
Additionally rename some of them, deprecate the former ones, and make the deprecation backward compatible (by making the new errors inherit from the former ones). | true |
2,333,231,042 | https://api.github.com/repos/huggingface/datasets/issues/6951 | https://github.com/huggingface/datasets/issues/6951 | 6,951 | load_dataset() should load all subsets, if no specific subset is specified | closed | 5 | 2024-06-04T11:02:33 | 2024-11-26T08:32:18 | 2024-07-01T11:33:10 | windmaple | [
"enhancement"
] | ### Feature request
Currently load_dataset() is forcing users to specify a subset. Example
`from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")`
```---------------------------------------------------------------------------
ValueError Traceback (most recen... | false |
2,333,005,974 | https://api.github.com/repos/huggingface/datasets/issues/6950 | https://github.com/huggingface/datasets/issues/6950 | 6,950 | `Dataset.with_format` behaves inconsistently with documentation | closed | 2 | 2024-06-04T09:18:32 | 2024-06-25T08:05:49 | 2024-06-25T08:05:49 | iansheng | [
"documentation"
] | ### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of ... | false |
2,332,336,573 | https://api.github.com/repos/huggingface/datasets/issues/6949 | https://github.com/huggingface/datasets/issues/6949 | 6,949 | load_dataset error | closed | 2 | 2024-06-04T01:24:45 | 2024-07-01T11:33:46 | 2024-07-01T11:33:46 | frederichen01 | [] | ### Describe the bug
Why does the program get stuck when I use the load_dataset method, and why does it still hang after loading for several hours? In fact, my json file is only 21 MB, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Data... | false |
2,331,758,300 | https://api.github.com/repos/huggingface/datasets/issues/6948 | https://github.com/huggingface/datasets/issues/6948 | 6,948 | to_tf_dataset: Visible devices cannot be modified after being initialized | open | 0 | 2024-06-03T18:10:57 | 2024-06-03T18:10:57 | null | logasja | [] | ### Describe the bug
When trying to use to_tf_dataset with a custom data_loader collate_fn and parallelism, I am met with the following error as many times as the number of workers set in ``num_workers``.
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _b... | false |
2,331,114,055 | https://api.github.com/repos/huggingface/datasets/issues/6947 | https://github.com/huggingface/datasets/issues/6947 | 6,947 | FileNotFoundError:error when loading C4 dataset | closed | 15 | 2024-06-03T13:06:33 | 2024-06-25T06:21:28 | 2024-06-25T06:21:28 | W-215 | [] | ### Describe the bug
I can't load the c4 dataset.
When I change the datasets package to version 2.12.2, I get datasets.utils.info_utils.ExpectedMoreSplits: {'train'}.
How can I fix this?
### Steps to reproduce the bug
1.from datasets import load_dataset
2.dataset = load_dataset('allenai/c4', data_files={'validat... | false |
2,330,276,848 | https://api.github.com/repos/huggingface/datasets/issues/6946 | https://github.com/huggingface/datasets/pull/6946 | 6,946 | Re-enable import sorting disabled by flake8:noqa directive when using ruff linter | closed | 2 | 2024-06-03T06:24:47 | 2024-06-04T10:00:08 | 2024-06-04T09:54:23 | albertvillanova | [] | Re-enable import sorting that was wrongly disabled by `flake8: noqa` directive after switching to `ruff` linter in datasets-2.10.0 PR:
- #5519
Note that after the linter switch, we wrongly replaced `flake8: noqa` with `ruff: noqa` in datasets-2.17.0 PR:
- #6619
That replacement was wrong because we kept the `is... | true |
2,330,224,869 | https://api.github.com/repos/huggingface/datasets/issues/6945 | https://github.com/huggingface/datasets/pull/6945 | 6,945 | Update yanked version of minimum requests requirement | closed | 5 | 2024-06-03T05:45:50 | 2024-06-18T07:36:15 | 2024-06-03T06:09:43 | albertvillanova | [] | Update yanked version of minimum requests requirement.
Version 2.32.1 was yanked: https://pypi.org/project/requests/2.32.1/ | true |
2,330,207,120 | https://api.github.com/repos/huggingface/datasets/issues/6944 | https://github.com/huggingface/datasets/pull/6944 | 6,944 | Set dev version | closed | 2 | 2024-06-03T05:29:59 | 2024-06-03T05:37:51 | 2024-06-03T05:31:47 | albertvillanova | [] | null | true |
2,330,176,890 | https://api.github.com/repos/huggingface/datasets/issues/6943 | https://github.com/huggingface/datasets/pull/6943 | 6,943 | Release 2.19.2 | closed | 1 | 2024-06-03T05:01:50 | 2024-06-03T05:17:41 | 2024-06-03T05:17:40 | albertvillanova | [] | null | true |
2,329,562,382 | https://api.github.com/repos/huggingface/datasets/issues/6942 | https://github.com/huggingface/datasets/issues/6942 | 6,942 | Import sorting is disabled by flake8 noqa directive after switching to ruff linter | closed | 0 | 2024-06-02T09:43:34 | 2024-06-04T09:54:24 | 2024-06-04T09:54:24 | albertvillanova | [
"maintenance"
] | When we switched to `ruff` linter in PR:
- #5519
import sorting was disabled in all files containing the `# flake8: noqa` directive
- https://github.com/astral-sh/ruff/issues/11679
We should re-enable import sorting on those files. | false |
2,328,930,165 | https://api.github.com/repos/huggingface/datasets/issues/6941 | https://github.com/huggingface/datasets/issues/6941 | 6,941 | Supporting FFCV: Fast Forward Computer Vision | open | 0 | 2024-06-01T05:34:52 | 2024-06-01T05:34:52 | null | Luciennnnnnn | [
"enhancement"
] | ### Feature request
Supporting FFCV, https://github.com/libffcv/ffcv
### Motivation
According to the benchmark, FFCV seems to be fastest image loading method.
### Your contribution
no | false |
2,328,637,831 | https://api.github.com/repos/huggingface/datasets/issues/6940 | https://github.com/huggingface/datasets/issues/6940 | 6,940 | Enable Sharding to Equal Sized Shards | open | 0 | 2024-05-31T21:55:50 | 2024-06-01T07:34:12 | null | yuvalkirstain | [
"enhancement"
] | ### Feature request
Add an option when sharding a dataset to make all shards the same size. It would be good to provide both options: by duplication and by truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining sha... | false |
2,328,059,386 | https://api.github.com/repos/huggingface/datasets/issues/6939 | https://github.com/huggingface/datasets/issues/6939 | 6,939 | ExpectedMoreSplits error when using data_dir | closed | 0 | 2024-05-31T15:08:42 | 2024-05-31T17:10:39 | 2024-05-31T17:10:39 | albertvillanova | [
"bug"
] | As reported by @regisss, an `ExpectedMoreSplits` error is raised when passing `data_dir`:
```python
from datasets import load_dataset
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
```
Traceback (most recent call last):
F... | false |
2,327,568,281 | https://api.github.com/repos/huggingface/datasets/issues/6938 | https://github.com/huggingface/datasets/pull/6938 | 6,938 | Fix expected splits when passing data_files or dir | closed | 2 | 2024-05-31T11:04:22 | 2024-05-31T15:28:03 | 2024-05-31T15:28:02 | lhoestq | [] | reported on slack:
The following code snippet gives an error with v2.19 but not with v2.18:
from datasets import load_dataset
```
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
and the error is:
```
Traceback (most recent ... | true |
2,327,212,611 | https://api.github.com/repos/huggingface/datasets/issues/6937 | https://github.com/huggingface/datasets/issues/6937 | 6,937 | JSON loader implicitly coerces floats to integers | open | 1 | 2024-05-31T08:09:12 | 2025-06-24T05:49:20 | null | albertvillanova | [
"bug"
] | The JSON loader implicitly coerces floats to integers.
The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`.
See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446
```
=================================== FAILURES ===========================... | false |
2,326,119,853 | https://api.github.com/repos/huggingface/datasets/issues/6936 | https://github.com/huggingface/datasets/issues/6936 | 6,936 | save_to_disk() freezes when saving on s3 bucket with multiprocessing | open | 3 | 2024-05-30T16:48:39 | 2025-02-06T22:12:52 | null | ycattan | [] | ### Describe the bug
I'm trying to save a `Dataset` using the `save_to_disk()` function with:
- `num_proc > 1`
- `dataset_path` being a s3 bucket path e.g. "s3://{bucket_name}/{dataset_folder}/"
The hf progress bar shows up but the saving does not seem to start.
When using one processor only (`num_proc=1`), e... | false |
2,325,612,022 | https://api.github.com/repos/huggingface/datasets/issues/6935 | https://github.com/huggingface/datasets/issues/6935 | 6,935 | Support for pathlib.Path in datasets 2.19.0 | open | 2 | 2024-05-30T12:53:36 | 2025-01-14T11:50:22 | null | lamyiowce | [] | ### Describe the bug
After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle?
### Steps to reproduce the bug
```
from datasets impor... | false |
2,325,341,717 | https://api.github.com/repos/huggingface/datasets/issues/6934 | https://github.com/huggingface/datasets/pull/6934 | 6,934 | Revert ci user | closed | 3 | 2024-05-30T10:45:26 | 2024-05-31T10:25:08 | 2024-05-30T10:45:37 | lhoestq | [] | null | true |
2,325,300,800 | https://api.github.com/repos/huggingface/datasets/issues/6933 | https://github.com/huggingface/datasets/pull/6933 | 6,933 | update ci user | closed | 2 | 2024-05-30T10:23:02 | 2024-05-30T10:30:54 | 2024-05-30T10:23:12 | lhoestq | [] | token is ok to be public since it's only for the hub-ci | true |
2,324,729,267 | https://api.github.com/repos/huggingface/datasets/issues/6932 | https://github.com/huggingface/datasets/pull/6932 | 6,932 | Update dataset_dict.py | closed | 2 | 2024-05-30T05:22:35 | 2024-06-04T12:56:20 | 2024-06-04T12:50:13 | Arunprakash-A | [] | shape returns (number of rows, number of columns) | true |
2,323,457,525 | https://api.github.com/repos/huggingface/datasets/issues/6931 | https://github.com/huggingface/datasets/pull/6931 | 6,931 | [WebDataset] Support compressed files | closed | 2 | 2024-05-29T14:19:06 | 2024-05-29T16:33:18 | 2024-05-29T16:24:21 | lhoestq | [] | null | true |
2,323,225,922 | https://api.github.com/repos/huggingface/datasets/issues/6930 | https://github.com/huggingface/datasets/issues/6930 | 6,930 | ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} | open | 2 | 2024-05-29T12:40:05 | 2024-07-23T06:25:24 | null | Polarisamoon | [] | ### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'valid... | false |
2,322,980,077 | https://api.github.com/repos/huggingface/datasets/issues/6929 | https://github.com/huggingface/datasets/issues/6929 | 6,929 | Avoid downloading the whole dataset when only README.me has been touched on hub. | open | 2 | 2024-05-29T10:36:06 | 2024-05-29T20:51:56 | null | zinc75 | [
"enhancement"
] | ### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if data files / parquet files are the exact same.
I think the current behaviour of the load_dataset function is triggered whenever a change of the hash o... | false |
2,322,267,727 | https://api.github.com/repos/huggingface/datasets/issues/6928 | https://github.com/huggingface/datasets/pull/6928 | 6,928 | Update process.mdx: Code Listings Fixes | closed | 1 | 2024-05-29T03:17:07 | 2024-06-04T13:08:19 | 2024-06-04T12:55:00 | FadyMorris | [] | null | true |
2,322,260,725 | https://api.github.com/repos/huggingface/datasets/issues/6927 | https://github.com/huggingface/datasets/pull/6927 | 6,927 | Update process.mdx: Minor Code Listings Updates and Fixes | closed | 0 | 2024-05-29T03:09:01 | 2024-05-29T03:12:46 | 2024-05-29T03:12:46 | FadyMorris | [] | null | true |
2,322,164,287 | https://api.github.com/repos/huggingface/datasets/issues/6926 | https://github.com/huggingface/datasets/pull/6926 | 6,926 | Update process.mdx: Fix code listing in Shard section | closed | 0 | 2024-05-29T01:25:55 | 2024-05-29T03:11:20 | 2024-05-29T03:11:08 | FadyMorris | [] | null | true |
2,321,084,967 | https://api.github.com/repos/huggingface/datasets/issues/6925 | https://github.com/huggingface/datasets/pull/6925 | 6,925 | Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets | closed | 6 | 2024-05-28T13:33:38 | 2024-11-07T20:41:58 | 2024-05-31T17:10:37 | albertvillanova | [] | Fix `NonMatchingSplitsSizesError` or `ExpectedMoreSplits` error for no-code Hub datasets if the user passes:
- `data_dir`
- `data_files`
The proposed solution is to avoid using exported dataset info (from Parquet exports) in these cases.
Additionally, also if the user passes `revision` other than "main" (so that ... | true |
2,320,531,015 | https://api.github.com/repos/huggingface/datasets/issues/6924 | https://github.com/huggingface/datasets/issues/6924 | 6,924 | Caching map result of DatasetDict. | open | 3 | 2024-05-28T09:07:41 | 2025-07-28T12:57:34 | null | MostHumble | [] | Hi!
I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins.
Changing num_proc induces recomputation of the map; I'm not sure why, or whether this is expected behavior.
here it says, that cached files are loaded sequentially:
https://github.com/... | false |
2,319,292,872 | https://api.github.com/repos/huggingface/datasets/issues/6923 | https://github.com/huggingface/datasets/issues/6923 | 6,923 | Export Parquet Tablet Audio-Set is null bytes in Arrow | open | 0 | 2024-05-27T14:27:57 | 2024-05-27T14:27:57 | null | anioji | [] | ### Describe the bug
Exporting the processed audio inside the table with the dataset.to_parquet function, the pyarrow object is {bytes: null, path: "Some/Path"}.
At the same time, the same dataset uploaded to the hub has bit arrays at the end. The push to hub is failing with:
```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python[/tuple](... | false |
2,315,322,738 | https://api.github.com/repos/huggingface/datasets/issues/6918 | https://github.com/huggingface/datasets/issues/6918 | 6,918 | NonMatchingSplitsSizesError when using data_dir | closed | 2 | 2024-05-24T12:43:39 | 2024-05-31T17:10:38 | 2024-05-31T17:10:38 | srehaag | [
"bug"
] | ### Describe the bug
Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset.
This appears to happen because the expected split is calculated based on the data in all the directories whereas the recorded split is calculated based on t... | false |
2,314,683,663 | https://api.github.com/repos/huggingface/datasets/issues/6917 | https://github.com/huggingface/datasets/issues/6917 | 6,917 | WinError 32 The process cannot access the file during load_dataset | open | 0 | 2024-05-24T07:54:51 | 2024-05-24T07:54:51 | null | elwe-2808 | [] | ### Describe the bug
When I try to load the opus_books dataset from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation))
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "tran... | false |
2,311,675,564 | https://api.github.com/repos/huggingface/datasets/issues/6916 | https://github.com/huggingface/datasets/issues/6916 | 6,916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | closed | 0 | 2024-05-22T23:52:15 | 2024-05-23T00:07:53 | 2024-05-23T00:07:53 | jetlime | [] | ### Describe the bug
I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a testing and training set. How can I prevent the split from happening?
### Steps to reproduce the bug
1. Have an unsplit dataset
```python
Dataset({ featur... | false |
2,310,564,961 | https://api.github.com/repos/huggingface/datasets/issues/6915 | https://github.com/huggingface/datasets/pull/6915 | 6,915 | Validate config name and data_files in packaged modules | closed | 5 | 2024-05-22T13:36:33 | 2024-06-06T09:32:10 | 2024-06-06T09:24:35 | albertvillanova | [] | Validate the config attributes `name` and `data_files` in packaged modules by making the derived classes call their parent `__post_init__` method.
Note that their parent `BuilderConfig` validates its attributes `name` and `data_files` in its `__post_init__` method: https://github.com/huggingface/datasets/blob/60d21e... | true |
2,310,107,326 | https://api.github.com/repos/huggingface/datasets/issues/6914 | https://github.com/huggingface/datasets/pull/6914 | 6,914 | Preserve JSON column order and support list of strings field | closed | 2 | 2024-05-22T09:58:54 | 2024-05-29T13:18:47 | 2024-05-29T13:12:23 | albertvillanova | [] | Preserve column order when loading from a JSON file with a list of dict (or with a field containing a list of dicts).
Additionally, support JSON file with a list of strings field.
Fix #6913. | true |
2,309,605,889 | https://api.github.com/repos/huggingface/datasets/issues/6913 | https://github.com/huggingface/datasets/issues/6913 | 6,913 | Column order is nondeterministic when loading from JSON | closed | 0 | 2024-05-22T05:30:14 | 2024-05-29T13:12:24 | 2024-05-29T13:12:24 | albertvillanova | [
"bug"
] | As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects.
For example, when loading a JSON file with a list of objects, each with the following ordered keys:
- [ID, Language, Topic],
the resulting dataset may have column... | false |
2,309,365,961 | https://api.github.com/repos/huggingface/datasets/issues/6912 | https://github.com/huggingface/datasets/issues/6912 | 6,912 | Add MedImg for streaming | open | 8 | 2024-05-22T00:55:30 | 2024-09-05T16:53:54 | null | lhallee | [
"dataset request"
] | ### Feature request
Host the MedImg dataset (similar to Imagenet but for biomedical images).
### Motivation
There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.
### Your con... | false |
2,308,152,711 | https://api.github.com/repos/huggingface/datasets/issues/6911 | https://github.com/huggingface/datasets/pull/6911 | 6,911 | Remove dead code for non-dict data_files from packaged modules | closed | 2 | 2024-05-21T12:10:24 | 2024-05-23T08:05:58 | 2024-05-23T07:59:57 | albertvillanova | [] | Remove dead code for non-dict data_files from packaged modules.
Since the merge of this PR:
- #2986
the builders' variable self.config.data_files is always a dict, which makes the condition on (str, list, tuple) dead code. | true |
2,307,570,084 | https://api.github.com/repos/huggingface/datasets/issues/6910 | https://github.com/huggingface/datasets/pull/6910 | 6,910 | Fix wrong type hints in data_files | closed | 2 | 2024-05-21T07:41:09 | 2024-05-23T06:04:05 | 2024-05-23T05:58:05 | albertvillanova | [] | Fix wrong type hints in data_files introduced in:
- #6493 | true |
2,307,508,120 | https://api.github.com/repos/huggingface/datasets/issues/6909 | https://github.com/huggingface/datasets/pull/6909 | 6,909 | Update requests >=2.32.1 to fix vulnerability | closed | 2 | 2024-05-21T07:11:20 | 2024-05-21T07:45:58 | 2024-05-21T07:38:25 | albertvillanova | [] | Update requests >=2.32.1 to fix vulnerability. | true |
2,304,958,116 | https://api.github.com/repos/huggingface/datasets/issues/6908 | https://github.com/huggingface/datasets/issues/6908 | 6,908 | Fail to load "stas/c4-en-10k" dataset since 2.16 version | closed | 2 | 2024-05-20T02:43:59 | 2024-05-24T10:58:09 | 2024-05-24T10:58:09 | guch8017 | [] | ### Describe the bug
When updating the datasets library to version 2.16+ (I tested it on 2.16, 2.19.0 and 2.19.1), using the following code to load the stas/c4-en-10k dataset
```python
from datasets import load_dataset, Dataset
dataset = load_dataset('stas/c4-en-10k')
```
and then it raises a UnicodeDecodeError like
... | false |
2,303,855,833 | https://api.github.com/repos/huggingface/datasets/issues/6907 | https://github.com/huggingface/datasets/issues/6907 | 6,907 | Support the deserialization of json lines files comprised of lists | open | 1 | 2024-05-18T05:07:23 | 2024-05-18T08:53:28 | null | umarbutler | [
"enhancement"
] | ### Feature request
I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a v... | false |
2,303,679,119 | https://api.github.com/repos/huggingface/datasets/issues/6906 | https://github.com/huggingface/datasets/issues/6906 | 6,906 | irc_disentangle - Issue with splitting data | closed | 6 | 2024-05-17T23:19:37 | 2024-07-16T00:21:56 | 2024-07-08T06:18:08 | eor51355 | [] | ### Describe the bug
I am trying to access your dataset through Python using `datasets.load_dataset("irc_disentangle")` and I am getting this error message:
ValueError: Instruction "train" corresponds to no data!
### Steps to reproduce the bug
import datasets
ds = datasets.load_dataset('irc_disentangle')
ds
#... | false |
2,303,098,587 | https://api.github.com/repos/huggingface/datasets/issues/6905 | https://github.com/huggingface/datasets/issues/6905 | 6,905 | Extraction protocol for arrow files is not defined | closed | 1 | 2024-05-17T16:01:41 | 2025-02-06T19:50:22 | 2025-02-06T19:50:20 | radulescupetru | [] | ### Describe the bug
Passing files with the `.arrow` extension into the data_files argument is very slow, at least when `streaming=True`.
### Steps to reproduce the bug
Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_ut... | false |
2,302,912,179 | https://api.github.com/repos/huggingface/datasets/issues/6904 | https://github.com/huggingface/datasets/pull/6904 | 6,904 | Fix decoding multi part extension | closed | 3 | 2024-05-17T14:32:57 | 2024-05-17T14:52:56 | 2024-05-17T14:46:54 | lhoestq | [] | e.g. a field named `url.txt` should be a treated as text
I also included a small fix to support .npz correctly | true |
2,300,436,053 | https://api.github.com/repos/huggingface/datasets/issues/6903 | https://github.com/huggingface/datasets/issues/6903 | 6,903 | Add the option of saving in parquet instead of arrow | open | 18 | 2024-05-16T13:35:51 | 2025-05-19T12:14:14 | null | arita37 | [
"enhancement"
] | ### Feature request
In dataset.save_to_disk('/path/to/save/dataset'),
add the option to save in parquet format
dataset.save_to_disk('/path/to/save/dataset', format="parquet"),
because arrow is not used for Production Big data.... (only parquet)
### Motivation
because arrow is not used for Production Big... | false |
2,300,256,241 | https://api.github.com/repos/huggingface/datasets/issues/6902 | https://github.com/huggingface/datasets/pull/6902 | 6,902 | Make CLI convert_to_parquet not raise error if no rights to create script branch | closed | 2 | 2024-05-16T12:21:27 | 2024-06-03T04:43:17 | 2024-05-16T12:51:05 | albertvillanova | [] | Make CLI convert_to_parquet not raise error if no rights to create "script" branch.
Note that before this PR, the error was not critical, because it was raised at the end of the script, once all the rest of the steps had already been performed.
Fix #6901.
Bug introduced in datasets-2.19.0 by:
- #6809 | true |
2,300,167,465 | https://api.github.com/repos/huggingface/datasets/issues/6901 | https://github.com/huggingface/datasets/issues/6901 | 6,901 | HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos | closed | 0 | 2024-05-16T11:40:22 | 2024-05-16T12:51:06 | 2024-05-16T12:51:06 | albertvillanova | [
"bug"
] | CLI convert_to_parquet cannot create "script" branch on 3rd party repos.
It can only create it on repos where the user executing the script has write access.
Otherwise, a 403 Forbidden HTTPError is raised:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/ut... | false |
2,298,489,733 | https://api.github.com/repos/huggingface/datasets/issues/6900 | https://github.com/huggingface/datasets/issues/6900 | 6,900 | [WebDataset] KeyError with user-defined `Features` when a field is missing in an example | closed | 5 | 2024-05-15T17:48:34 | 2024-06-28T09:30:13 | 2024-06-28T09:30:13 | lhoestq | [] | reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1
```
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples
example[field_name] = {"path": example["_... | false |
2,298,059,597 | https://api.github.com/repos/huggingface/datasets/issues/6899 | https://github.com/huggingface/datasets/issues/6899 | 6,899 | List of dictionary features get standardized | open | 2 | 2024-05-15T14:11:35 | 2025-04-01T20:48:03 | null | sohamparikh | [] | ### Describe the bug
Hi, I'm trying to create a HF dataset from a list using Dataset.from_list.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets librar... | false |
2,294,432,108 | https://api.github.com/repos/huggingface/datasets/issues/6898 | https://github.com/huggingface/datasets/pull/6898 | 6,898 | Fix YAML error in README files appearing on GitHub | closed | 3 | 2024-05-14T05:21:57 | 2024-05-16T14:36:57 | 2024-05-16T14:28:16 | albertvillanova | [] | Fix YAML error in README files appearing on GitHub.
See error message:

Fix #6897. | true |
2,293,428,243 | https://api.github.com/repos/huggingface/datasets/issues/6897 | https://github.com/huggingface/datasets/issues/6897 | 6,897 | datasets template guide :: issue in documentation YAML | closed | 2 | 2024-05-13T17:33:59 | 2024-05-16T14:28:17 | 2024-05-16T14:28:17 | bghira | [] | ### Describe the bug
There is a YAML error at the top of the page, and I don't think it's supposed to be there
### Steps to reproduce the bug
1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md)
2. Observe a big red error at the top
3. The rest of the ... | false |
2,293,176,061 | https://api.github.com/repos/huggingface/datasets/issues/6896 | https://github.com/huggingface/datasets/issues/6896 | 6,896 | Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset | open | 1 | 2024-05-13T15:41:57 | 2025-03-25T01:21:06 | null | finiteautomata | [] | ### Describe the bug
While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error:
```python
---------------------------------------------------------------------------
NonMatchingSplitsSizesError Traceback (most recent call last)
[<ipyth... | false |
2,292,993,156 | https://api.github.com/repos/huggingface/datasets/issues/6895 | https://github.com/huggingface/datasets/pull/6895 | 6,895 | Document that to_json defaults to JSON Lines | closed | 2 | 2024-05-13T14:22:34 | 2024-05-16T14:37:25 | 2024-05-16T14:31:26 | albertvillanova | [] | Document that `Dataset.to_json` defaults to JSON Lines, by adding explanation in the corresponding docstring.
Fix #6894. | true |
2,292,840,226 | https://api.github.com/repos/huggingface/datasets/issues/6894 | https://github.com/huggingface/datasets/issues/6894 | 6,894 | Better document defaults of to_json | closed | 0 | 2024-05-13T13:30:54 | 2024-05-16T14:31:27 | 2024-05-16T14:31:27 | albertvillanova | [
"documentation"
] | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | false |
2,292,677,439 | https://api.github.com/repos/huggingface/datasets/issues/6893 | https://github.com/huggingface/datasets/pull/6893 | 6,893 | Close gzipped files properly | closed | 3 | 2024-05-13T12:24:39 | 2024-05-13T13:53:17 | 2024-05-13T13:01:54 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/6877 | true |
2,291,201,347 | https://api.github.com/repos/huggingface/datasets/issues/6892 | https://github.com/huggingface/datasets/pull/6892 | 6,892 | Add support for categorical/dictionary types | closed | 3 | 2024-05-12T07:15:08 | 2024-06-07T15:01:39 | 2024-06-07T12:20:42 | EthanSteinberg | [] | Arrow has a very useful dictionary/categorical type (https://arrow.apache.org/docs/python/generated/pyarrow.dictionary.html). This data type has significant speed, memory and disk benefits over pa.string() when there are only a few unique text strings in a column.
Unfortunately, huggingface datasets currently does n... | true |