## Dataset Viewer

Auto-converted to Parquet. The viewer reports the following schema:

| Column | Type | Lengths / values |
| --- | --- | --- |
| url | string | 61–61 characters |
| repository_url | string | 1 value |
| labels_url | string | 75–75 characters |
| comments_url | string | 70–70 characters |
| events_url | string | 68–68 characters |
| html_url | string | 49–51 characters |
| id | int64 | 1.2B–1.82B |
| node_id | string | 18–19 characters |
| number | int64 | 4.13k–6.08k |
| title | string | 1–290 characters |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | 2–33.9k characters |
| reactions | dict | |
| timeline_url | string | 70–70 characters |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
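A minimal sketch of loading this dataset with the `datasets` library; the Hub ID `your-username/github-issues` is a placeholder, since the card does not state where the dataset is hosted:

```python
from datasets import load_dataset

# Placeholder Hub ID; substitute the repository this card belongs to.
issues = load_dataset("your-username/github-issues", split="train")

# Columns follow the schema above, e.g. separate issues from pull requests:
only_issues = issues.filter(lambda row: not row["is_pull_request"])
print(only_issues[0]["title"])
```

The first rows shown by the viewer are reproduced below, one labeled block per row.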
### PR #6080: Remove README link to deprecated Colab notebook

- url: https://api.github.com/repos/huggingface/datasets/issues/6080
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6080/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6080/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6080/events
- html_url: https://github.com/huggingface/datasets/pull/6080
- id: 1,822,667,554
- node_id: PR_kwDODunzps5WdL4K
- number: 6,080
- title: Remove README link to deprecated Colab notebook
- user:
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
- created_at: 2023-07-26T15:27:49
- updated_at: 2023-07-26T16:24:43
- closed_at: 2023-07-26T16:14:34
- author_association: CONTRIBUTOR
- active_lock_reason: null
- draft: false
- pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6080", "html_url": "https://github.com/huggingface/datasets/pull/6080", "diff_url": "https://github.com/huggingface/datasets/pull/6080.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6080.patch", "merged_at": "2023-07-26T16:14:34" }
- body: null
- reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/6080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6080/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
### Issue #6079: Iterating over DataLoader based on HF datasets is stuck forever

- url: https://api.github.com/repos/huggingface/datasets/issues/6079
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6079/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6079/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6079/events
- html_url: https://github.com/huggingface/datasets/issues/6079
- id: 1,822,597,471
- node_id: I_kwDODunzps5soqFf
- number: 6,079
- title: Iterating over DataLoader based on HF datasets is stuck forever
- user:
{ "login": "arindamsarkar93", "id": 5454868, "node_id": "MDQ6VXNlcjU0NTQ4Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/5454868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arindamsarkar93", "html_url": "https://github.com/arindamsarkar93", "followers_url": "https://api.github.com/users/arindamsarkar93/followers", "following_url": "https://api.github.com/users/arindamsarkar93/following{/other_user}", "gists_url": "https://api.github.com/users/arindamsarkar93/gists{/gist_id}", "starred_url": "https://api.github.com/users/arindamsarkar93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arindamsarkar93/subscriptions", "organizations_url": "https://api.github.com/users/arindamsarkar93/orgs", "repos_url": "https://api.github.com/users/arindamsarkar93/repos", "events_url": "https://api.github.com/users/arindamsarkar93/events{/privacy}", "received_events_url": "https://api.github.com/users/arindamsarkar93/received_events", "type": "User", "site_admin": false }
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "When the process starts to hang, can you interrupt it with CTRL + C and paste the error stack trace here? ", "Thanks @mariosasko for your prompt response, here's the stack trace:\r\n\r\n```\r\nKeyboardInterrupt Traceback (most recent call last)\r\nCell In[12], line 4\r\n 2 t = time.t...
- created_at: 2023-07-26T14:52:37
- updated_at: 2023-07-26T19:14:16
- closed_at: null
- author_association: NONE
- active_lock_reason: null
- draft: null
- pull_request: null
- body:
### Describe the bug I am using Amazon Sagemaker notebook (Amazon Linux 2) with python 3.10 based Conda environment. I have a dataset in parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code is working for python 3.6 based conda environment seamlessly. What should be my next steps here? ### Steps to reproduce the bug ``` train_dataset = load_dataset( "parquet", data_files = {'train': tr_data_path + '*.parquet'}, split = 'train', collate_fn = streaming_data_collate_fn, streaming = True ).with_format('torch') train_dataloader = DataLoader(train_dataset, batch_size = 2, num_workers = 0) t = time.time() iter_ = 0 for batch in train_dataloader: iter_ += 1 if iter_ == 1000: break print (time.time() - t) ``` ### Expected behavior The snippet should work normally and load the next batch of data. ### Environment info datasets: '2.14.0' pyarrow: '12.0.0' torch: '2.0.0' Python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] !uname -r 5.10.178-162.673.amzn2.x86_64
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6079/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6079/timeline
null
null
false
### Issue #6078: resume_download with streaming=True

- url: https://api.github.com/repos/huggingface/datasets/issues/6078
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6078/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6078/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6078/events
- html_url: https://github.com/huggingface/datasets/issues/6078
- id: 1,822,501,472
- node_id: I_kwDODunzps5soSpg
- number: 6,078
- title: resume_download with streaming=True
- user:
{ "login": "NicolasMICAUX", "id": 72763959, "node_id": "MDQ6VXNlcjcyNzYzOTU5", "avatar_url": "https://avatars.githubusercontent.com/u/72763959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NicolasMICAUX", "html_url": "https://github.com/NicolasMICAUX", "followers_url": "https://api.github.com/users/NicolasMICAUX/followers", "following_url": "https://api.github.com/users/NicolasMICAUX/following{/other_user}", "gists_url": "https://api.github.com/users/NicolasMICAUX/gists{/gist_id}", "starred_url": "https://api.github.com/users/NicolasMICAUX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NicolasMICAUX/subscriptions", "organizations_url": "https://api.github.com/users/NicolasMICAUX/orgs", "repos_url": "https://api.github.com/users/NicolasMICAUX/repos", "events_url": "https://api.github.com/users/NicolasMICAUX/events{/privacy}", "received_events_url": "https://api.github.com/users/NicolasMICAUX/received_events", "type": "User", "site_admin": false }
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "Currently, it's not possible to efficiently resume streaming after an error. Eventually, we plan to support this for Parquet (see https://github.com/huggingface/datasets/issues/5380). ", "Ok thank you for your answer" ]
- created_at: 2023-07-26T14:08:22
- updated_at: 2023-07-26T21:10:40
- closed_at: null
- author_association: NONE
- active_lock_reason: null
- draft: null
- pull_request: null
- body:
### Describe the bug I used: ``` dataset = load_dataset( "oscar-corpus/OSCAR-2201", token=True, language="fr", streaming=True, split="train" ) ``` Unfortunately, the server had a problem during the training process. I saved the step my training stopped at. But how can I resume download from step 1_000_´000 without re-streaming all the first 1 million docs of the dataset? `download_config=DownloadConfig(resume_download=True)` seems to not work with streaming=True. ### Steps to reproduce the bug ``` from datasets import load_dataset, DownloadConfig dataset = load_dataset( "oscar-corpus/OSCAR-2201", token=True, language="fr", streaming=True, # optional split="train", download_config=DownloadConfig(resume_download=True) ) # interupt the run and try to relaunch it => this restart from scratch ``` ### Expected behavior I would expect a parameter to start streaming from a given index in the dataset. ### Environment info - `datasets` version: 2.14.0 - Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6078/timeline
null
null
false
### Issue #6077: Mapping gets stuck at 99%

- url: https://api.github.com/repos/huggingface/datasets/issues/6077
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6077/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6077/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6077/events
- html_url: https://github.com/huggingface/datasets/issues/6077
- id: 1,822,486,810
- node_id: I_kwDODunzps5soPEa
- number: 6,077
- title: Mapping gets stuck at 99%
- user:
{ "login": "Laurent2916", "id": 21087104, "node_id": "MDQ6VXNlcjIxMDg3MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Laurent2916", "html_url": "https://github.com/Laurent2916", "followers_url": "https://api.github.com/users/Laurent2916/followers", "following_url": "https://api.github.com/users/Laurent2916/following{/other_user}", "gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}", "starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions", "organizations_url": "https://api.github.com/users/Laurent2916/orgs", "repos_url": "https://api.github.com/users/Laurent2916/repos", "events_url": "https://api.github.com/users/Laurent2916/events{/privacy}", "received_events_url": "https://api.github.com/users/Laurent2916/received_events", "type": "User", "site_admin": false }
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "The `MAX_MAP_BATCH_SIZE = 1_000_000_000` hack is bad as it loads the entire dataset into RAM when performing `.map`. Instead, it's best to use `.iter(batch_size)` to iterate over the data batches and compute `mean` for each column. (`stddev` can be computed in another pass).\r\n\r\nAlso, these arrays are big, so i...
- created_at: 2023-07-26T14:00:40
- updated_at: 2023-07-26T18:29:10
- closed_at: null
- author_association: CONTRIBUTOR
- active_lock_reason: null
- draft: null
- pull_request: null
- body:
### Describe the bug Hi ! I'm currently working with a large (~150GB) unnormalized dataset at work. The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retreive it. I want to normalize the features of the dataset, meaning I need to compute the mean and standard deviation metric for each feature of the entire dataset. I cannot load the entire dataset to RAM as it is too big, so following [this discussion on the huggingface discourse](https://discuss.huggingface.co/t/copy-columns-in-a-dataset-and-compute-statistics-for-a-column/22157) I am using a [map operation](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map) to first compute the metrics and a second map operation to apply them on the dataset. The problem lies in the second mapping, as it gets stuck at ~99%. By checking what the process does (using `htop` and `strace`) it seems to be doing a lot of I/O operations, and I'm not sure why. Obviously, I could always normalize the dataset externally and then load it using a loading script. However, since the internal dataset is updated fairly frequently, using the library to perform normalization automatically would make it much easier for me. ### Steps to reproduce the bug I'm able to reproduce the problem using the following scripts: ```python # random_data.py import datasets import torch _VERSION = "1.0.0" class RandomDataset(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo( version=_VERSION, supervised_keys=None, features=datasets.Features( { "positions": datasets.Array2D( shape=(30000, 3), dtype="float32", ), "normals": datasets.Array2D( shape=(30000, 3), dtype="float32", ), "features": datasets.Array2D( shape=(30000, 6), dtype="float32", ), "scalars": datasets.Sequence( feature=datasets.Value("float32"), length=20, ), }, ), ) def _split_generators(self, dl_manager): return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, # type: ignore gen_kwargs={"nb_samples": 1000}, ), datasets.SplitGenerator( name=datasets.Split.TEST, # type: ignore gen_kwargs={"nb_samples": 100}, ), ] def _generate_examples(self, nb_samples: int): for idx in range(nb_samples): yield idx, { "positions": torch.rand(30000, 3), "normals": torch.rand(30000, 3), "features": torch.rand(30000, 6), "scalars": torch.rand(20), } ``` ```python # main.py import datasets import torch def compute_mean_std( dataset: datasets.Dataset, ) -> dict[str, torch.Tensor]: """Compute the mean and standard deviation of each feature of the dataset. Args: dataset (`Dataset`): A huggingface dataset. Returns: dict: A dictionary containing the mean and standard deviation of each feature. """ result = {} for key in dataset: # extract data from dataset data: torch.Tensor = dataset[key] # type: ignore # reshape data, from (a, ..., b, c) -> (*, c) data = data.reshape(-1, data.shape[-1]) # compute mean and std mean = data.mean(dim=0) # (c) std = data.std(dim=0) # (c) # store in result result[key] = torch.stack((mean, std)) return result def apply_mean_std( dataset: datasets.Dataset, mean_std: datasets.Dataset, ) -> dict[str, torch.Tensor]: """Normalize the dataset using the mean and standard deviation of each feature. Args: dataset (`Dataset`): A huggingface dataset. mean_std (`Dataset`): A huggingface dataset containing the mean and standard deviation of each feature. Returns: dict: A dictionary containing the normalized dataset. 
""" result = {} for key in mean_std.column_names: # extract data from dataset data: torch.Tensor = dataset[key] # type: ignore # extract mean and std from dict mean = mean_std[key][0] # type: ignore std = mean_std[key][1] # type: ignore # normalize data normalized_data = (data - mean) / std result[key] = normalized_data return result # hack to force the map function to use the entire dataset MAX_MAP_BATCH_SIZE = 1_000_000_000 # get dataset ds = datasets.load_dataset( path="random_data.py", split="train", ).with_format("torch") # compute mean/std of each feature mean_std = ds.map( desc="Computing mean/std", # type: ignore remove_columns=ds.column_names, # type: ignore function=compute_mean_std, batch_size=MAX_MAP_BATCH_SIZE, batched=True, ) # normalize each feature of the dataset ds_normalized = ds.map( desc="Applying mean/std", # type: ignore function=apply_mean_std, batched=False, fn_kwargs={ "mean_std": mean_std, }, ) ``` ### Expected behavior Using the previous scripts, the `ds_normalized` mapping completes in ~5 minutes, but any subsequent use of `ds_normalized` is really really slow, for example reapplying `apply_mean_std` to `ds_normalized` takes forever. This is very strange, I'm sure I must be missing something, but I would still expect this to be faster. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6077/timeline
null
null
false
### PR #6076: No gzip encoding from github

- url: https://api.github.com/repos/huggingface/datasets/issues/6076
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6076/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6076/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6076/events
- html_url: https://github.com/huggingface/datasets/pull/6076
- id: 1,822,345,597
- node_id: PR_kwDODunzps5WcGVR
- number: 6,076
- title: No gzip encoding from github
- user:
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6076). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
- created_at: 2023-07-26T12:46:07
- updated_at: 2023-07-26T14:01:21
- closed_at: null
- author_association: MEMBER
- active_lock_reason: null
- draft: false
- pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6076", "html_url": "https://github.com/huggingface/datasets/pull/6076", "diff_url": "https://github.com/huggingface/datasets/pull/6076.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6076.patch", "merged_at": null }
- body: Don't accept gzip encoding from github, otherwise some files are not streamable + seekable. fix https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans/discussions/2#64c0e0c1a04a514ba6303e84 and making sure https://github.com/huggingface/datasets/issues/2918 works as well
- reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/6076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6076/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
### Issue #6075: Error loading music files using `load_dataset`

- url: https://api.github.com/repos/huggingface/datasets/issues/6075
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6075/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6075/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6075/events
- html_url: https://github.com/huggingface/datasets/issues/6075
- id: 1,822,341,398
- node_id: I_kwDODunzps5snrkW
- number: 6,075
- title: Error loading music files using `load_dataset`
- user:
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "This code behaves as expected on my local machine or in Colab. Which version of `soundfile` do you have installed? MP3 requires `soundfile>=0.12.1`.", "I upgraded the `soundfile` and it's working now! \r\nThanks @mariosasko for the help!" ]
- created_at: 2023-07-26T12:44:05
- updated_at: 2023-07-26T13:08:08
- closed_at: 2023-07-26T13:08:08
- author_association: NONE
- active_lock_reason: null
- draft: null
- pull_request: null
- body:
### Describe the bug I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test I got the following error - ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__ return self._getitem(key) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem formatted_output = format_table( File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table return formatter(pa_table, query_type=query_type) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__ return self.format_column(pa_table) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column return self.features.decode_column(column, column_name) if self.features else column File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column [decode_nested_example(self[column_name], value) if value is not None else None for value in column] File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp> [decode_nested_example(self[column_name], value) if value is not None else None for value in column] File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example array, sampling_rate = sf.read(f) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read with SoundFile(file, 'r', samplerate, channels, File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__ self._file = self._open(file, mode_int, closefd) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open _error_check(_snd.sf_error(file_ptr), File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace')) RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised. ``` ### Steps to reproduce the bug Code to reproduce the error - ```python from datasets import load_dataset ds = load_dataset("susnato/pop2piano_real_music_test", split="test") print(ds[0]) ``` ### Expected behavior I should be able to read the music file without any error. ### Environment info - `datasets` version: 2.14.0 - Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6075/timeline
null
completed
false
### PR #6074: Misc doc improvements

- url: https://api.github.com/repos/huggingface/datasets/issues/6074
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6074/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6074/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6074/events
- html_url: https://github.com/huggingface/datasets/pull/6074
- id: 1,822,299,128
- node_id: PR_kwDODunzps5Wb8O_
- number: 6,074
- title: Misc doc improvements
- user:
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
- created_at: 2023-07-26T12:20:54
- updated_at: 2023-07-26T14:42:56
- closed_at: null
- author_association: CONTRIBUTOR
- active_lock_reason: null
- draft: false
- pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6074", "html_url": "https://github.com/huggingface/datasets/pull/6074", "diff_url": "https://github.com/huggingface/datasets/pull/6074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6074.patch", "merged_at": null }
- body: Removes the warning about requiring to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also, deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has been out for a while).
- reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/6074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6074/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
### Issue #6073: version2.3.2 load_dataset()data_files can't include .xxxx in path

- url: https://api.github.com/repos/huggingface/datasets/issues/6073
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6073/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6073/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6073/events
- html_url: https://github.com/huggingface/datasets/issues/6073
- id: 1,822,167,804
- node_id: I_kwDODunzps5snBL8
- number: 6,073
- title: version2.3.2 load_dataset()data_files can't include .xxxx in path
- user:
{ "login": "BUAAChuanWang", "id": 45893496, "node_id": "MDQ6VXNlcjQ1ODkzNDk2", "avatar_url": "https://avatars.githubusercontent.com/u/45893496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BUAAChuanWang", "html_url": "https://github.com/BUAAChuanWang", "followers_url": "https://api.github.com/users/BUAAChuanWang/followers", "following_url": "https://api.github.com/users/BUAAChuanWang/following{/other_user}", "gists_url": "https://api.github.com/users/BUAAChuanWang/gists{/gist_id}", "starred_url": "https://api.github.com/users/BUAAChuanWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BUAAChuanWang/subscriptions", "organizations_url": "https://api.github.com/users/BUAAChuanWang/orgs", "repos_url": "https://api.github.com/users/BUAAChuanWang/repos", "events_url": "https://api.github.com/users/BUAAChuanWang/events{/privacy}", "received_events_url": "https://api.github.com/users/BUAAChuanWang/received_events", "type": "User", "site_admin": false }
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "Version 2.3.2 is over one year old, so please use the latest release (2.14.0) to get the expected behavior. Version 2.3.2 does not contain some fixes we made to fix resolving hidden files/directories (starting with a dot)." ]
- created_at: 2023-07-26T11:09:31
- updated_at: 2023-07-26T12:34:45
- closed_at: null
- author_association: NONE
- active_lock_reason: null
- draft: null
- pull_request: null
- body:
### Describe the bug First, I cd workdir. Then, I just use load_dataset("json", data_file={"train":"/a/b/c/.d/train/train.json", "test":"/a/b/c/.d/train/test.json"}) that couldn't work and <FileNotFoundError: Unable to find '/a/b/c/.d/train/train.jsonl' at /a/b/c/.d/> And I debug, it is fine in version2.1.2 So there maybe a bug in path join. Here is the whole bug report: /x/datasets/loa │ │ d.py:1656 in load_dataset │ │ │ │ 1653 │ ignore_verifications = ignore_verifications or save_infos │ │ 1654 │ │ │ 1655 │ # Create a dataset builder │ │ ❱ 1656 │ builder_instance = load_dataset_builder( │ │ 1657 │ │ path=path, │ │ 1658 │ │ name=name, │ │ 1659 │ │ data_dir=data_dir, │ │ │ │ x/datasets/loa │ │ d.py:1439 in load_dataset_builder │ │ │ │ 1436 │ if use_auth_token is not None: │ │ 1437 │ │ download_config = download_config.copy() if download_config e │ │ 1438 │ │ download_config.use_auth_token = use_auth_token │ │ ❱ 1439 │ dataset_module = dataset_module_factory( │ │ 1440 │ │ path, │ │ 1441 │ │ revision=revision, │ │ 1442 │ │ download_config=download_config, │ │ │ │ x/datasets/loa │ │ d.py:1097 in dataset_module_factory │ │ │ │ 1094 │ │ │ 1095 │ # Try packaged │ │ 1096 │ if path in _PACKAGED_DATASETS_MODULES: │ │ ❱ 1097 │ │ return PackagedDatasetModuleFactory( │ │ 1098 │ │ │ path, │ │ 1099 │ │ │ data_dir=data_dir, │ │ 1100 │ │ │ data_files=data_files, │ │ │ │x/datasets/loa │ │ d.py:743 in get_module │ │ │ │ 740 │ │ │ if self.data_dir is not None │ │ 741 │ │ │ else get_patterns_locally(str(Path().resolve())) │ │ 742 │ │ ) │ │ ❱ 743 │ │ data_files = DataFilesDict.from_local_or_remote( │ │ 744 │ │ │ patterns, │ │ 745 │ │ │ use_auth_token=self.download_config.use_auth_token, │ │ 746 │ │ │ base_path=str(Path(self.data_dir).resolve()) if self.data │ │ │ │ x/datasets/dat │ │ a_files.py:590 in from_local_or_remote │ │ │ │ 587 │ │ out = cls() │ │ 588 │ │ for key, patterns_for_key in patterns.items(): │ │ 589 │ │ │ out[key] = ( │ │ ❱ 590 │ │ │ │ DataFilesList.from_local_or_remote( │ │ 591 │ │ │ │ │ patterns_for_key, │ │ 592 │ │ │ │ │ base_path=base_path, │ │ 593 │ │ │ │ │ allowed_extensions=allowed_extensions, │ │ │ │ /x/datasets/dat │ │ a_files.py:558 in from_local_or_remote │ │ │ │ 555 │ │ use_auth_token: Optional[Union[bool, str]] = None, │ │ 556 │ ) -> "DataFilesList": │ │ 557 │ │ base_path = base_path if base_path is not None else str(Path() │ │ ❱ 558 │ │ data_files = resolve_patterns_locally_or_by_urls(base_path, pa │ │ 559 │ │ origin_metadata = _get_origin_metadata_locally_or_by_urls(data │ │ 560 │ │ return cls(data_files, origin_metadata) │ │ 561 │ │ │ │ /x/datasets/dat │ │ a_files.py:195 in resolve_patterns_locally_or_by_urls │ │ │ │ 192 │ │ if is_remote_url(pattern): │ │ 193 │ │ │ data_files.append(Url(pattern)) │ │ 194 │ │ else: │ │ ❱ 195 │ │ │ for path in _resolve_single_pattern_locally(base_path, pat │ │ 196 │ │ │ │ data_files.append(path) │ │ 197 │ │ │ 198 │ if not data_files: │ │ │ │ /x/datasets/dat │ │ a_files.py:145 in _resolve_single_pattern_locally │ │ │ │ 142 │ │ error_msg = f"Unable to find '{pattern}' at {Path(base_path).r │ │ 143 │ │ if allowed_extensions is not None: │ │ 144 │ │ │ error_msg += f" with any supported extension {list(allowed │ │ ❱ 145 │ │ raise FileNotFoundError(error_msg) │ │ 146 │ return sorted(out) │ │ 147 ### Steps to reproduce the bug 1. Version=2.3.2 2. In shell, cd workdir.(cd /a/b/c/.d/) 3. load_dataset("json", data_file={"train":"/a/b/c/.d/train/train.json", "test":"/a/b/c/.d/train/test.json"}) ### Expected behavior fix it please~ ### Environment info 2.3.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6073/timeline
null
null
false
### PR #6072: Fix fsspec storage_options from load_dataset

- url: https://api.github.com/repos/huggingface/datasets/issues/6072
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6072/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6072/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6072/events
- html_url: https://github.com/huggingface/datasets/pull/6072
- id: 1,822,123,560
- node_id: PR_kwDODunzps5WbWFN
- number: 6,072
- title: Fix fsspec storage_options from load_dataset
- user:
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6072). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
- created_at: 2023-07-26T10:44:23
- updated_at: 2023-07-26T19:26:48
- closed_at: null
- author_association: MEMBER
- active_lock_reason: null
- draft: false
- pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6072", "html_url": "https://github.com/huggingface/datasets/pull/6072", "diff_url": "https://github.com/huggingface/datasets/pull/6072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6072.patch", "merged_at": null }
- body: close https://github.com/huggingface/datasets/issues/6071
- reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/6072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6072/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
### Issue #6071: storage_options provided to load_dataset not fully piping through since datasets 2.14.0

- url: https://api.github.com/repos/huggingface/datasets/issues/6071
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6071/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6071/events
- html_url: https://github.com/huggingface/datasets/issues/6071
- id: 1,821,990,749
- node_id: I_kwDODunzps5smV9d
- number: 6,071
- title: storage_options provided to load_dataset not fully piping through since datasets 2.14.0
- user:
{ "login": "exs-avianello", "id": 128361578, "node_id": "U_kgDOB6akag", "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/exs-avianello", "html_url": "https://github.com/exs-avianello", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "repos_url": "https://api.github.com/users/exs-avianello/repos", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "type": "User", "site_admin": false }
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?", "Hi @lhoestq ! Thank you so much 🙌 \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a...
- created_at: 2023-07-26T09:37:20
- updated_at: 2023-07-26T11:04:35
- closed_at: null
- author_association: NONE
- active_lock_reason: null
- draft: null
- pull_request: null
- body:
### Describe the bug Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set. I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()` ### Steps to reproduce the bug ```python import fsspec import pandas as pd import datasets # Generate mock parquet file data_files = "demo.parquet" pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files) _storage_options = {"x": 1, "y": 2} fs = fsspec.filesystem("file", **_storage_options) dataset = datasets.load_dataset( "parquet", data_files=data_files, storage_options=fs.storage_options ) ``` Looking at the `storage_options` resolved here: https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331 they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339 the call will fail if the user-provided `storage_options` were needed. --- A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly: ```python dataset = datasets.load_dataset( "parquet", data_files=data_files, storage_options=fs.storage_options, download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}), ) ``` ### Expected behavior `storage_options` provided to `load_dataset` take effect in all backend filesystem operations. ### Environment info datasets==2.14.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6071/timeline
null
null
false
### PR #6070: Fix Quickstart notebook link

- url: https://api.github.com/repos/huggingface/datasets/issues/6070
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6070/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6070/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6070/events
- html_url: https://github.com/huggingface/datasets/pull/6070
- id: 1,820,836,330
- node_id: PR_kwDODunzps5WXDLc
- number: 6,070
- title: Fix Quickstart notebook link
- user:
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
- created_at: 2023-07-25T17:48:37
- updated_at: 2023-07-25T18:19:01
- closed_at: 2023-07-25T18:10:16
- author_association: CONTRIBUTOR
- active_lock_reason: null
- draft: false
- pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6070", "html_url": "https://github.com/huggingface/datasets/pull/6070", "diff_url": "https://github.com/huggingface/datasets/pull/6070.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6070.patch", "merged_at": "2023-07-25T18:10:16" }
- body: Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt)
- reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/6070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6070/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
### Issue #6069: KeyError: dataset has no key "image"

- url: https://api.github.com/repos/huggingface/datasets/issues/6069
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6069/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6069/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6069/events
- html_url: https://github.com/huggingface/datasets/issues/6069
- id: 1,820,831,535
- node_id: I_kwDODunzps5sh68v
- number: 6,069
- title: KeyError: dataset has no key "image"
- user:
{ "login": "etetteh", "id": 28512232, "node_id": "MDQ6VXNlcjI4NTEyMjMy", "avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/etetteh", "html_url": "https://github.com/etetteh", "followers_url": "https://api.github.com/users/etetteh/followers", "following_url": "https://api.github.com/users/etetteh/following{/other_user}", "gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}", "starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/etetteh/subscriptions", "organizations_url": "https://api.github.com/users/etetteh/orgs", "repos_url": "https://api.github.com/users/etetteh/repos", "events_url": "https://api.github.com/users/etetteh/events{/privacy}", "received_events_url": "https://api.github.com/users/etetteh/received_events", "type": "User", "site_admin": false }
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "You can list the dataset's columns with `ds.column_names` before `.map` to check whether the dataset has an `image` column. If it doesn't, then this is a bug. Otherwise, please paste the line with the `.map` call.\r\n\r\n\r\n", "This is the piece of code I am running:\r\n```\r\ndata_transforms = utils.get_data_a...
- created_at: 2023-07-25T17:45:50
- updated_at: 2023-07-26T17:33:49
- closed_at: null
- author_association: NONE
- active_lock_reason: null
- draft: null
- pull_request: null
- body:
### Describe the bug I've loaded a local image dataset with: `ds = laod_dataset("imagefolder", data_dir=path-to-data)` And defined a transform to process the data, following the Datasets docs. However, I get a keyError error, indicating there's no "image" key in my dataset. When I printed out the example_batch sent to the transformation function, it shows only the labels are being sent to the function. For some reason, the images are not in the example batches. ### Steps to reproduce the bug I'm using the latest stable version of datasets ### Expected behavior I expect the example_batches to contain both images and labels ### Environment info I'm using the latest stable version of datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6069/timeline
null
null
false
### PR #6068: fix tqdm lock deletion

- url: https://api.github.com/repos/huggingface/datasets/issues/6068
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/6068/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/6068/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/6068/events
- html_url: https://github.com/huggingface/datasets/pull/6068
- id: 1,820,106,952
- node_id: PR_kwDODunzps5WUkZi
- number: 6,068
- title: fix tqdm lock deletion
- user:
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
- created_at: 2023-07-25T11:17:25
- updated_at: 2023-07-25T15:29:39
- closed_at: 2023-07-25T15:17:50
- author_association: MEMBER
- active_lock_reason: null
- draft: false
- pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6068", "html_url": "https://github.com/huggingface/datasets/pull/6068", "diff_url": "https://github.com/huggingface/datasets/pull/6068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6068.patch", "merged_at": "2023-07-25T15:17:50" }
- body: related to https://github.com/huggingface/datasets/issues/6066
- reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/6068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6068/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true

# Dataset Card for "github-issues"

More Information needed
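The rows shown in the viewer above follow the shape of GitHub's REST issues endpoint for the huggingface/datasets repository, plus a derived `is_pull_request` flag. As a sketch only (the endpoint and fields are from the public GitHub API; pagination, authentication, and rate-limit handling are omitted), rows of this shape can be collected like this:

```python
import requests

# GitHub's /issues endpoint returns both issues and pull requests;
# the sample rows above carry this exact URL pattern in their `url` field.
endpoint = "https://api.github.com/repos/huggingface/datasets/issues"
batch = requests.get(endpoint, params={"state": "all", "per_page": 100}).json()

rows = []
for issue in batch:
    # `is_pull_request` is a derived column: the raw payload marks PRs
    # with a nested "pull_request" object rather than a boolean.
    issue["is_pull_request"] = "pull_request" in issue
    rows.append(issue)
```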
