Dataset columns (dtype, observed min to max):
- id: int64, 599M to 3.26B
- number: int64, 1 to 7.7k
- title: string, lengths 1 to 290
- body: string, lengths 0 to 228k
- state: string, 2 classes
- html_url: string, lengths 46 to 51
- created_at: timestamp[s], 2020-04-14 10:18:02 to 2025-07-23 08:04:53
- updated_at: timestamp[s], 2020-04-27 16:04:17 to 2025-07-23 18:53:44
- closed_at: timestamp[s], 2020-04-14 12:01:40 to 2025-07-23 16:44:42
- user: dict
- labels: list, lengths 0 to 4
- is_pull_request: bool, 2 classes
- comments: list, lengths 0 to 0
1,164,406,008
3,880
Change the framework switches to the new syntax
This PR updates the syntax of the framework-specific code samples. With this new syntax, you'll be able to:
- have paragraphs of text be framework-specific instead of just code samples
- have support for Flax code samples if you want.

This should be merged after https://github.com/huggingface/doc-builder/pull/63 and https://github.com/huggingface/doc-builder/pull/130
closed
https://github.com/huggingface/datasets/pull/3880
2022-03-09T20:29:10
2022-03-15T14:13:28
2022-03-15T14:13:27
{ "login": "sgugger", "id": 35901082, "type": "User" }
[]
true
[]
1,164,311,612
3,879
SQuAD v2 metric: create README.md
Proposing SQuAD v2 metric card
closed
https://github.com/huggingface/datasets/pull/3879
2022-03-09T18:47:56
2022-03-10T16:48:59
2022-03-10T16:48:59
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,164,305,335
3,878
Update cats_vs_dogs size
It seems like 12 new examples have been added to the `cats_vs_dogs` dataset. This PR updates the size in the card and the info file to avoid a verification error (reported by @stevhliu).
closed
https://github.com/huggingface/datasets/pull/3878
2022-03-09T18:40:56
2022-09-30T08:47:43
2022-03-10T14:21:23
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,164,146,311
3,877
Align metadata to DCAT/DCAT-AP
**Is your feature request related to a problem? Please describe.**
Align the metadata to DCAT to describe datasets.

**Describe the solution you'd like**
Reuse terms and structure from DCAT in the metadata file; ideally, generate a JSON-LD file that is DCAT compliant.

**Describe alternatives you've considered**

**Additional context**
DCAT is a W3C standard extended in Europe with DCAT-AP; an example is data.europa.eu, which publishes datasets metadata in DCAT-AP.
open
https://github.com/huggingface/datasets/issues/3877
2022-03-09T16:12:25
2022-03-09T16:33:42
null
{ "login": "EmidioStani", "id": 278367, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,164,045,075
3,876
Fix download_mode in dataset_module_factory
Fix the `download_mode` value set in `dataset_module_factory`. Before the fix, it was set to a `bool` (defaulting to `False`). This PR also properly sets its default value in all public functions.
closed
https://github.com/huggingface/datasets/pull/3876
2022-03-09T14:54:33
2022-03-10T08:47:00
2022-03-10T08:46:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,164,029,673
3,875
Module namespace cleanup for v2.0
This is an attempt to make the user-facing `datasets`' submodule namespace cleaner. In particular, this PR does the following:
* removes the unused `zip_nested` and `flatten_nest_dict` and their accompanying tests
* removes `pyarrow` from the top-level namespace
* properly uses `__all__` and the `from <module> import *` syntax to avoid importing the `<module>`'s submodules
* cleans up the `utils` namespace
* moves the `temp_seed` context manager from `datasets/utils/file_utils.py` to `datasets/utils/py_utils.py`
closed
https://github.com/huggingface/datasets/pull/3875
2022-03-09T14:43:07
2022-03-11T15:42:06
2022-03-11T15:42:05
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,164,013,511
3,874
add MSE and MAE metrics - V2
Created a new pull request to resolve unrelated changes in the PR caused by rebasing. Ref older PR: [#3845](https://github.com/huggingface/datasets/pull/3845). Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)
closed
https://github.com/huggingface/datasets/pull/3874
2022-03-09T14:30:16
2022-03-09T17:20:42
2022-03-09T17:18:20
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[]
true
[]
1,163,961,578
3,873
Create SQuAD metric README.md
Proposal for a metrics card structure (with an example based on the SQuAD metric). @thomwolf @lhoestq @douwekiela @lewtun -- feel free to comment on structure or content (it's an initial draft, so I realize there's stuff missing!).
closed
https://github.com/huggingface/datasets/pull/3873
2022-03-09T13:47:08
2022-03-10T16:45:57
2022-03-10T16:45:57
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,163,853,026
3,872
HTTP error 504 Server Error: Gateway Time-out
I am trying to push a large dataset (450,000+ records) with the help of `push_to_hub()`. While pushing, it gives an error like this:

```
Traceback (most recent call last):
  File "data_split_speech.py", line 159, in <module>
    data_new_2.push_to_hub("user-name/dataset-name",private=True)
  File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
    repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
  File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
    api.upload_file(
  File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
    raise err
  File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
    r.raise_for_status()
  File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```

Can anyone help me resolve this issue?
closed
https://github.com/huggingface/datasets/issues/3872
2022-03-09T12:03:37
2022-03-15T16:19:50
2022-03-15T16:19:50
{ "login": "illiyas-sha", "id": 83509215, "type": "User" }
[]
false
[]
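A common workaround for transient gateway timeouts like the one above is to retry the push with exponential backoff. This is a minimal, hypothetical sketch of that pattern, with `RuntimeError` standing in for the real `requests.exceptions.HTTPError`; the helper names (`with_retries`, `flaky_upload`) are illustrative, not part of any library:

```python
import time

def with_retries(fn, max_retries=5, base_delay=1.0):
    """Call fn, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for an HTTP 5xx error from the Hub
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * (2 ** attempt))

# Simulate an upload that fails twice with a 504 before succeeding.
calls = {"n": 0}

def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("504 Server Error: Gateway Time-out")
    return "ok"

print(with_retries(flaky_upload, base_delay=0.01))
```

Splitting the dataset into smaller shards before pushing also reduces the chance that any single upload request times out.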
1,163,714,113
3,871
add pandas to env command
Pandas is a required package and is used quite a bit. I don't see any downside to adding its version to the `datasets-cli env` command.
closed
https://github.com/huggingface/datasets/pull/3871
2022-03-09T09:48:51
2022-03-09T11:21:38
2022-03-09T11:21:37
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
1,163,633,239
3,870
Add wikitablequestions dataset
null
closed
https://github.com/huggingface/datasets/pull/3870
2022-03-09T08:27:43
2022-03-14T11:19:24
2022-03-14T11:16:19
{ "login": "SivilTaram", "id": 10275209, "type": "User" }
[]
true
[]
1,163,434,800
3,869
Making the Hub the place for datasets in Portuguese
Let's make Hugging Face Datasets the central hub for datasets in Portuguese :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese speaking community. What are some datasets in Portuguese worth integrating into the Hugging Face hub? Special thanks to @augusnunes for his collaboration on identifying the first ones: - [NILC - USP](http://www.nilc.icmc.usp.br/nilc/index.php/tools-and-resources). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). cc @osanseviero
open
https://github.com/huggingface/datasets/issues/3869
2022-03-09T03:06:18
2022-03-09T09:04:09
null
{ "login": "omarespejel", "id": 4755430, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,162,914,114
3,868
Ignore duplicate keys if `ignore_verifications=True`
Currently, it's impossible to generate a dataset if some keys from `_generate_examples` are duplicated. This PR allows skipping the check for duplicate keys if `ignore_verifications` is set to `True`.
closed
https://github.com/huggingface/datasets/pull/3868
2022-03-08T17:14:56
2022-03-09T13:50:45
2022-03-09T13:50:44
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
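The duplicate-key check that `ignore_verifications=True` skips can be illustrated with a minimal plain-Python sketch (this is an illustration of the idea, not the actual `datasets` internals; `check_duplicate_keys` is a hypothetical name):

```python
def check_duplicate_keys(generator, ignore_verifications=False):
    """Yield (key, example) pairs, raising on duplicate keys unless verification is skipped."""
    seen = set()
    for key, example in generator:
        if not ignore_verifications:
            if key in seen:
                raise ValueError(f"Found duplicate key: {key}")
            seen.add(key)
        yield key, example

examples = [(0, {"text": "a"}), (1, {"text": "b"}), (1, {"text": "c"})]

# With verification on, the duplicate key 1 raises.
try:
    list(check_duplicate_keys(iter(examples)))
except ValueError as e:
    print(e)

# With verification skipped, all three examples pass through.
rows = list(check_duplicate_keys(iter(examples), ignore_verifications=True))
print(len(rows))
```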
1,162,896,605
3,867
Update for the rename doc-builder -> hf-doc-utils
This PR adapts the job to the upcoming change of name of `doc-builder`.
closed
https://github.com/huggingface/datasets/pull/3867
2022-03-08T16:58:25
2023-09-24T09:54:44
2022-03-08T17:30:45
{ "login": "sgugger", "id": 35901082, "type": "User" }
[]
true
[]
1,162,833,848
3,866
Bring back imgs so that forks don't get broken
null
closed
https://github.com/huggingface/datasets/pull/3866
2022-03-08T16:01:31
2022-03-08T17:37:02
2022-03-08T17:37:01
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,162,821,908
3,865
Add logo img
null
closed
https://github.com/huggingface/datasets/pull/3865
2022-03-08T15:50:59
2023-09-24T09:54:31
2022-03-08T16:01:59
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,162,804,942
3,864
Update image dataset tags
Align the existing image datasets' tags with new tags introduced in #3800.
closed
https://github.com/huggingface/datasets/pull/3864
2022-03-08T15:36:32
2022-03-08T17:04:47
2022-03-08T17:04:46
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,162,802,857
3,863
Update code blocks
Following https://github.com/huggingface/datasets/pull/3860#issuecomment-1061756712 and https://github.com/huggingface/datasets/pull/3690, we need to update the code blocks to use markdown instead of sphinx.
closed
https://github.com/huggingface/datasets/pull/3863
2022-03-08T15:34:43
2022-03-09T16:45:30
2022-03-09T16:45:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,162,753,733
3,862
Manipulate columns on IterableDataset (rename columns, cast, etc.)
I added:
- add_column
- cast
- rename_column
- rename_columns

related to https://github.com/huggingface/datasets/issues/3444

TODO:
- [x] docs
- [x] tests
closed
https://github.com/huggingface/datasets/pull/3862
2022-03-08T14:53:57
2022-03-10T16:40:22
2022-03-10T16:40:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
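What a column rename means on a stream can be sketched without the library: each example's keys are rewritten lazily as the iterator is consumed, so nothing is materialized up front. This is a plain-Python illustration of the idea, not the `IterableDataset` implementation:

```python
def rename_column(examples, old_name, new_name):
    # Lazily rewrite each example's keys as the stream is consumed.
    for ex in examples:
        yield {new_name if k == old_name else k: v for k, v in ex.items()}

stream = iter([{"text": "hello", "label": 0}, {"text": "world", "label": 1}])
renamed = rename_column(stream, "label", "target")
print(next(renamed))  # {'text': 'hello', 'target': 0}
```

On a real `IterableDataset`, the equivalent call is `dataset.rename_column("label", "target")`, which likewise returns a new lazy dataset rather than touching the underlying data.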
1,162,702,044
3,861
big_patent cased version
Hi! I am interested in working with the big_patent dataset. In Tensorflow, there are a number of versions of the dataset:
- 1.0.0: lower cased tokenized words
- 2.0.0: update to use cased raw strings
- 2.1.2 (default): fix update to cased raw strings.

The version in the huggingface `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there a way to load it already, or would it be possible to add that version?
closed
https://github.com/huggingface/datasets/issues/3861
2022-03-08T14:08:55
2023-04-21T14:32:03
2023-04-21T14:32:03
{ "login": "slvcsl", "id": 25265140, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,162,623,329
3,860
Small doc fixes
null
closed
https://github.com/huggingface/datasets/pull/3860
2022-03-08T12:55:39
2022-03-08T17:37:13
2022-03-08T17:37:13
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,162,559,333
3,859
Unable to dowload big_patent (FileNotFoundError)
## Describe the bug
I am trying to download some splits of the big_patent dataset, using the following code: `ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")`

However, this leads to a FileNotFoundError:

    FileNotFoundError                         Traceback (most recent call last)
    <ipython-input-3-8d8a745706a9> in <module>()
          1 from datasets import load_dataset
    ----> 2 ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")

    8 frames
    /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
       1705             ignore_verifications=ignore_verifications,
       1706             try_from_hf_gcs=try_from_hf_gcs,
    -> 1707             use_auth_token=use_auth_token,
       1708         )
       1709

    /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
        593         if not downloaded_from_gcs:
        594             self._download_and_prepare(
    --> 595                 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
        596             )
        597         # Sync info

    /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
        659         split_dict = SplitDict(dataset_name=self.name)
        660         split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
    --> 661         split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
        662
        663         # Checksums verification

    /root/.cache/huggingface/modules/datasets_modules/datasets/big_patent/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c/big_patent.py in _split_generators(self, dl_manager)
        123         split_types = ["train", "val", "test"]
        124         extract_paths = dl_manager.extract(
    --> 125             {k: os.path.join(dl_path, "bigPatentData", k + ".tar.gz") for k in split_types}
        126         )
        127         extract_paths = {k: os.path.join(extract_paths[k], k) for k in split_types}

    /usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py in extract(self, path_or_paths, num_proc)
        282         download_config.extract_compressed_file = True
        283         extracted_paths = map_nested(
    --> 284             partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
        285         )
        286         path_or_paths = NestedDataStructure(path_or_paths)

    /usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm)
        260         mapped = [
        261             _single_map_nested((function, obj, types, None, True))
    --> 262             for obj in utils.tqdm(iterable, disable=disable_tqdm)
        263         ]
        264     else:

    /usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in <listcomp>(.0)
        260         mapped = [
        261             _single_map_nested((function, obj, types, None, True))
    --> 262             for obj in utils.tqdm(iterable, disable=disable_tqdm)
        263         ]
        264     else:

    /usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in _single_map_nested(args)
        194     # Singleton first to spare some computation
        195     if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
    --> 196         return function(data_struct)
        197
        198     # Reduce logging to keep things readable in multiprocessing with tqdm

    /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
        314     elif is_local_path(url_or_filename):
        315         # File, but it doesn't exist.
    --> 316         raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
        317     else:
        318         # Something unknown

    FileNotFoundError: Local file /root/.cache/huggingface/datasets/downloads/extracted/ad068abb3e11f9f2f5440b62e37eb2b03ee515df9de1637c55cd1793b68668b2/bigPatentData/train.tar.gz doesn't exist

I have tried this in a number of machines, including on Colab, so I think this is not environment dependent. How do I load the bigPatent dataset?
closed
https://github.com/huggingface/datasets/issues/3859
2022-03-08T11:47:12
2022-03-08T13:04:09
2022-03-08T13:04:04
{ "login": "slvcsl", "id": 25265140, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,162,526,688
3,858
Update index.mdx margins
null
closed
https://github.com/huggingface/datasets/pull/3858
2022-03-08T11:11:52
2022-03-08T12:57:57
2022-03-08T12:57:56
{ "login": "gary149", "id": 3841370, "type": "User" }
[]
true
[]
1,162,525,353
3,857
Order of dataset changes due to glob.glob.
## Describe the bug After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the OS system. There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)` even the streaming download manager (if I'm not mistaken): https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483
open
https://github.com/huggingface/datasets/issues/3857
2022-03-08T11:10:30
2022-03-14T11:08:22
null
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "generic discussion", "color": "c5def5" } ]
false
[]
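The fix suggested above is mechanical: wrap every `glob.glob(...)` in `sorted(...)`. A small self-contained demonstration of why (file names and the temp directory here are made up for the example):

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    for name in ["shard-2.txt", "shard-0.txt", "shard-1.txt"]:
        open(os.path.join(tmp, name), "w").close()

    # glob.glob makes no ordering guarantee: the result depends on the
    # filesystem, so two machines can see the shards in different orders.
    unordered = glob.glob(os.path.join(tmp, "*.txt"))

    # sorted() makes the order deterministic on every OS.
    ordered = sorted(glob.glob(os.path.join(tmp, "*.txt")))
    print([os.path.basename(p) for p in ordered])
    # ['shard-0.txt', 'shard-1.txt', 'shard-2.txt']
```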
1,162,522,034
3,856
Fix push_to_hub with null images
This code currently raises an error because of the null image:

```python
import datasets

dataset_dict = {
    'name': ['image001.jpg', 'image002.jpg'],
    'image': ['cat.jpg', None]
}
features = datasets.Features({
    'name': datasets.Value('string'),
    'image': datasets.Image(),
})
dataset = datasets.Dataset.from_dict(dataset_dict, features)
dataset.push_to_hub("username/dataset")  # this line produces an error: 'NoneType' object is not subscriptable
```

I fixed this in this PR

TODO:
- [x] add a test
closed
https://github.com/huggingface/datasets/pull/3856
2022-03-08T11:07:09
2022-03-08T15:22:17
2022-03-08T15:22:16
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,162,448,589
3,855
Bad error message when loading private dataset
## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following. An organization adds a dataset in private mode and wants to load it afterward.

```python
from transformers import load_dataset

ds = load_dataset("NewT5/dummy_data", "dummy")
```

This command then fails with:

```bash
FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub
```

**even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org. We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO.

## Steps to reproduce the bug
E.g. execute the following code to see the different error messages between `transformers` and `datasets`.

1. Transformers

```python
from transformers import BertModel

BertModel.from_pretrained("NewT5/dummy_model")
```

The error message is clearer here - it gives:

```
OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```

Let's maybe do the same for datasets? The PR was introduced to `transformers` here: https://github.com/huggingface/transformers/pull/15261

## Expected results
Better error message

## Actual results
Specify the actual results or traceback.

## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3855
2022-03-08T09:55:17
2022-07-11T15:06:40
2022-07-11T15:06:40
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,162,434,199
3,854
load only England English dataset from common voice english dataset
training_data = load_dataset("common_voice", "en", split='train[:250]+validation[:250]')
testing_data = load_dataset("common_voice", "en", split="test[:200]")

I'm trying to load only the 8% of the English common voice data with accent == "England English." Can somebody assist me with this?

**Typical Voice Accent Proportions:**
- 24% United States English
- 8% England English
- 5% India and South Asia (India, Pakistan, Sri Lanka)
- 3% Australian English
- 3% Canadian English
- 2% Scottish English
- 1% Irish English
- 1% Southern African (South Africa, Zimbabwe, Namibia)
- 1% New Zealand English

Can we replicate this for Age as well?

**Age proportions of the common voice:**
- 24% 19 - 29
- 14% 30 - 39
- 10% 40 - 49
- 6% < 19
- 4% 50 - 59
- 4% 60 - 69
- 1% 70 - 79
closed
https://github.com/huggingface/datasets/issues/3854
2022-03-08T09:40:52
2024-03-23T12:40:58
2022-03-09T08:13:33
{ "login": "amanjaiswal777", "id": 36677001, "type": "User" }
[ { "name": "question", "color": "d876e3" } ]
false
[]
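The usual approach to the accent question above is to filter on the `accent` field after loading. Here is a hedged sketch of the predicate over plain dicts (the example rows are made up; with the real dataset you would pass the same predicate to `dataset.filter(...)`):

```python
# Toy rows standing in for Common Voice examples, which carry an "accent" field.
rows = [
    {"sentence": "hello", "accent": "England English"},
    {"sentence": "howdy", "accent": "United States English"},
    {"sentence": "cheers", "accent": "England English"},
]

def keep_england(example):
    # Keep only examples labeled with the England English accent.
    return example["accent"] == "England English"

england_only = [r for r in rows if keep_england(r)]
print(len(england_only))  # 2
```

With `datasets`, the equivalent is `training_data.filter(keep_england)`, which returns a new dataset containing only the matching rows.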
1,162,386,592
3,853
add ontonotes_conll dataset
# Introduction of the dataset
OntoNotes v5.0 is the final version of the OntoNotes corpus: a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information. This dataset is the version of OntoNotes v5.0 extended and used in the CoNLL-2012 shared task; it includes v4 train/dev and v9 test data for English/Chinese/Arabic and the corrected v12 train/dev/test data (English only). This dataset is widely used in named entity recognition, coreference resolution, and semantic role labeling.

In the dataset loading script, I modify and use the code of [AllenNLP/Ontonotes](https://docs.allennlp.org/models/main/models/common/ontonotes/#ontonotes) to read the special conll files without adding an extra package dependency.

# Some workarounds I did
1. task ids: I add tasks that I can't find anywhere (`semantic-role-labeling`, `lemmatization`, and `word-sense-disambiguation`) to the task category `structure-prediction`, because they are related to "syntax". I feel there may be a better name for the task category since some of the tasks mentioned aren't related to structure, but I have no good idea.
2. `dl_manager.extract`: since we get another zip after unzipping the downloaded zip data, I have to use `dl_manager.extract` directly inside `_generate_examples`. But when testing dummy data, `dl_manager.extract` does nothing, so I add a conditional that manually extracts the data in that case.

# Help
I don't know how to fix the doc-building error.
closed
https://github.com/huggingface/datasets/pull/3853
2022-03-08T08:53:42
2022-03-15T10:48:02
2022-03-15T10:48:02
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
1,162,252,337
3,852
Redundant add dataset information and dead link.
> Alternatively, you can follow the steps to [add a dataset](https://huggingface.co/docs/datasets/add_dataset.html) and [share a dataset](https://huggingface.co/docs/datasets/share_dataset.html) in the documentation.

The "add a dataset" link gives a 404 error, and the "share a dataset" link has changed. I feel this information is redundant/deprecated now since we have a more detailed guide for "How to add a dataset?".
closed
https://github.com/huggingface/datasets/pull/3852
2022-03-08T05:57:05
2022-03-08T16:54:36
2022-03-08T16:54:36
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[]
true
[]
1,162,137,998
3,851
Load audio dataset error
## Load audio dataset error
Hi, when I load an audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,

```
from datasets import load_dataset, load_metric, Audio

raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```

the following errors occur:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]

/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
   1924         """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
   1925         return self._getitem(
-> 1926             key,
   1927         )
   1928

/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
   1909         pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
   1910         formatted_output = format_table(
-> 1911             pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
   1912         )
   1913         return formatted_output

/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
    530         python_formatter = PythonFormatter(features=None)
    531     if format_columns is None:
--> 532         return formatter(pa_table, query_type=query_type)
    533     elif query_type == "column":
    534         if key in format_columns:

/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
    279     def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
    280         if query_type == "row":
--> 281             return self.format_row(pa_table)
    282         elif query_type == "column":
    283             return self.format_column(pa_table)

/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
    310         row = self.python_arrow_extractor().extract_row(pa_table)
    311         if self.decoded:
--> 312             row = self.python_features_decoder.decode_row(row)
    313         return row
    314

/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
    219
    220     def decode_row(self, row: dict) -> dict:
--> 221         return self.features.decode_example(row) if self.features else row
    222
    223     def decode_column(self, column: list, column_name: str) -> list:

/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
   1320             else value
   1321             for column_name, (feature, value) in utils.zip_dict(
-> 1322                 {key: value for key, value in self.items() if key in example}, example
   1323             )
   1324         }

/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
   1319             if self._column_requires_decoding[column_name]
   1320             else value
-> 1321             for column_name, (feature, value) in utils.zip_dict(
   1322                 {key: value for key, value in self.items() if key in example}, example
   1323             )

/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
   1053     # Object with special decoding:
   1054     elif isinstance(schema, (Audio, Image)):
-> 1055         return schema.decode_example(obj) if obj is not None else None
   1056     return obj
   1057

/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
    100             array, sampling_rate = self._decode_non_mp3_file_like(file)
    101         else:
--> 102             array, sampling_rate = self._decode_non_mp3_path_like(path)
    103         return {"path": path, "array": array, "sampling_rate": sampling_rate}
    104

/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
    143
    144         with xopen(path, "rb") as f:
--> 145             array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
    146         return array, sampling_rate
    147

/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
    110
    111     y = []
--> 112     with audioread.audio_open(os.path.realpath(path)) as input_file:
    113         sr_native = input_file.samplerate
    114         n_channels = input_file.channels

/usr/lib/python3.6/posixpath.py in realpath(filename)
    392     """Return the canonical path of the specified filename, eliminating any
    393     symbolic links encountered in the path."""
--> 394     filename = os.fspath(filename)
    395     path, ok = _joinrealpath(filename[:0], filename, {})
    396     return abspath(path)

TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```

## Expected results

```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
       0.01623535, 0.01724243]),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
 'sampling_rate': 16000}
```
closed
https://github.com/huggingface/datasets/issues/3851
2022-03-08T02:16:04
2022-09-27T12:13:55
2022-03-08T11:20:06
{ "login": "lemoner20", "id": 31890987, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,162,126,030
3,850
[feat] Add tqdm arguments
In this PR, tqdm arguments can be passed to `map()` and similar functions, in order to be more flexible.
closed
https://github.com/huggingface/datasets/pull/3850
2022-03-08T01:53:25
2022-12-16T05:34:07
2022-12-16T05:34:07
{ "login": "penguinwang96825", "id": 28087825, "type": "User" }
[]
true
[]
1,162,091,075
3,849
Add "Adversarial GLUE" dataset to datasets library
Adds the Adversarial GLUE dataset: https://adversarialglue.github.io/

```python
>>> import datasets
>>>
>>> datasets.load_dataset('adv_glue')
Using the latest cached version of the module from /home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/adv_glue/26709a83facad2830d72d4419dd179c0be092f4ad3303ad0ebe815d0cdba5cb4 (last modified on Mon Mar 7 19:19:48 2022) since it couldn't be found locally at adv_glue., or remotely on the Hugging Face Hub.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jxm3/random/datasets/src/datasets/load.py", line 1657, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/jxm3/random/datasets/src/datasets/load.py", line 1510, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
  File "/home/jxm3/random/datasets/src/datasets/builder.py", line 1021, in __init__
    super().__init__(*args, **kwargs)
  File "/home/jxm3/random/datasets/src/datasets/builder.py", line 258, in __init__
    self.config, self.config_id = self._create_builder_config(
  File "/home/jxm3/random/datasets/src/datasets/builder.py", line 337, in _create_builder_config
    raise ValueError(
ValueError: Config name is missing.
Please pick one among the available configs: ['adv_sst2', 'adv_qqp', 'adv_mnli', 'adv_mnli_mismatched', 'adv_qnli', 'adv_rte']
Example of usage:
	`load_dataset('adv_glue', 'adv_sst2')`
>>> datasets.load_dataset('adv_glue', 'adv_sst2')['validation'][0]
Reusing dataset adv_glue (/home/jxm3/.cache/huggingface/datasets/adv_glue/adv_sst2/1.0.0/3719a903f606f2c96654d87b421bc01114c37084057cdccae65cd7bc24b10933)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 604.11it/s]
{'sentence': "it 's an uneven treat that bores fun at the democratic exercise while also examining its significance for those who take part .", 'label': 1, 'idx': 0}
```
closed
https://github.com/huggingface/datasets/pull/3849
2022-03-08T00:47:11
2022-03-28T11:17:14
2022-03-28T11:12:04
{ "login": "jxmorris12", "id": 13238952, "type": "User" }
[]
true
[]
1,162,076,902
3,848
NonMatchingChecksumError when checksum is None
I ran into the following error when adding a new dataset:

```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c6425dd74e5b55f2f325c9', 'num_bytes': 40662}}
verification_name = 'dataset source files'

    def verify_checksums(expected_checksums: Optional[dict], recorded_checksums: dict, verification_name=None):
        if expected_checksums is None:
            logger.info("Unable to verify checksums.")
            return
        if len(set(expected_checksums) - set(recorded_checksums)) > 0:
            raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
        if len(set(recorded_checksums) - set(expected_checksums)) > 0:
            raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
        bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]]
        for_verification_name = " for " + verification_name if verification_name is not None else ""
        if len(bad_urls) > 0:
            error_msg = "Checksums didn't match" + for_verification_name + ":\n"
>           raise NonMatchingChecksumError(error_msg + str(bad_urls))
E           datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
E           ['https://adversarialglue.github.io/dataset/dev.zip']

src/datasets/utils/info_utils.py:40: NonMatchingChecksumError
```

## Expected results
The dataset downloads correctly, and there is no error.

## Actual results
The datasets library is looking for a checksum of None, gets a non-None checksum, and throws an error. This is clearly a bug.
closed
https://github.com/huggingface/datasets/issues/3848
2022-03-08T00:24:12
2022-03-15T14:37:26
2022-03-15T12:28:23
{ "login": "jxmorris12", "id": 13238952, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,161,856,417
3,847
Datasets' cache not re-used
## Describe the bug For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing caches are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory. ## Steps to reproduce the bug Here is a reproducer. The GPT2 tokenizer works perfectly with caching, but not the RoBERTa tokenizer in this example. ```python from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1") # tokenizer = AutoTokenizer.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("roberta-base") text_column_name = "text" column_names = raw_datasets["train"].column_names def tokenize_function(examples): return tokenizer(examples[text_column_name], return_special_tokens_mask=True) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, remove_columns=column_names, load_from_cache_file=True, desc="Running tokenizer on every text in dataset", ) ``` ## Expected results No tokenization would be required after the 1st run. Everything should be loaded from the cache. ## Actual results Tokenization for some subsets is repeated at the 2nd and 3rd run. Starting from the 4th run, everything is loaded from the cache. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Ubuntu 18.04.6 LTS - Python version: 3.6.9 - PyArrow version: 6.0.1
open
https://github.com/huggingface/datasets/issues/3847
2022-03-07T19:55:15
2025-05-19T11:58:55
null
{ "login": "gejinchen", "id": 15106980, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,161,810,226
3,846
Update faiss device docstring
Following https://github.com/huggingface/datasets/pull/3721 I updated the docstring of the `device` argument of the FAISS related methods of `Dataset`
closed
https://github.com/huggingface/datasets/pull/3846
2022-03-07T19:06:59
2022-03-07T19:21:23
2022-03-07T19:21:22
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,161,739,483
3,845
add RMSE and MAE metrics.
This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API. Both implementations are based on scikit-learn. Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) Please suggest any changes if required. Thank you.
closed
https://github.com/huggingface/datasets/pull/3845
2022-03-07T17:53:24
2022-03-09T16:50:03
2022-03-09T16:50:03
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[]
true
[]
1,161,686,754
3,844
Add rmse and mae metrics.
This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API. Both implementations are based on scikit-learn. Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) Any suggestions and changes required will be helpful.
closed
https://github.com/huggingface/datasets/pull/3844
2022-03-07T17:06:38
2022-03-07T17:24:32
2022-03-07T17:15:06
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[]
true
[]
1,161,397,812
3,843
Fix Google Drive URL to avoid Virus scan warning in streaming mode
The streaming version of https://github.com/huggingface/datasets/pull/3787. Fix #3835 CC: @albertvillanova
closed
https://github.com/huggingface/datasets/pull/3843
2022-03-07T13:09:19
2022-03-15T12:30:25
2022-03-15T12:30:23
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,161,336,483
3,842
Align IterableDataset.shuffle with Dataset.shuffle
From #3444, Dataset.shuffle can have the same API as IterableDataset.shuffle (i.e. in streaming mode). Currently you can pass an optional seed to both if you want, BUT IterableDataset.shuffle always requires a buffer_size, used for approximate shuffling. I propose using a reasonable default value (maybe 1000) instead. In this PR, I set the default `buffer_size` value to 1,000, and I reorder the `IterableDataset.shuffle` arguments to match `Dataset.shuffle`, i.e. making `seed` the first argument.
closed
https://github.com/huggingface/datasets/pull/3842
2022-03-07T12:10:46
2022-03-07T19:03:43
2022-03-07T19:03:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,161,203,842
3,841
Pyright reportPrivateImportUsage when `from datasets import load_dataset`
## Describe the bug Pyright complains about a module member not being exported. ## Steps to reproduce the bug Use an editor/IDE with the Pyright language server with the default configuration: ```python from datasets import load_dataset ``` ## Expected results No complaints from Pyright ## Actual results Pyright complains as below: ``` `load_dataset` is not exported from module "datasets" Import from "datasets.load" instead [reportPrivateImportUsage] ``` Importing from `datasets.load` does indeed solve the problem, but I believe importing directly from the top-level `datasets` is the intended usage per the documentation. ## Environment info - `datasets` version: 1.18.3 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.10 - PyArrow version: 7.0.0
closed
https://github.com/huggingface/datasets/issues/3841
2022-03-07T10:24:04
2023-02-18T19:14:03
2023-02-13T13:48:41
{ "login": "lkhphuc", "id": 12573521, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,161,183,773
3,840
Pin responses to fix CI for Windows
Temporarily fix CI for Windows by pinning `responses`. See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 Fix: #3839
closed
https://github.com/huggingface/datasets/pull/3840
2022-03-07T10:06:53
2022-03-07T10:12:36
2022-03-07T10:07:24
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,161,183,482
3,839
CI is broken for Windows
## Describe the bug See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 ``` ___________________ test_datasetdict_from_text_split[test] ____________________ [gw0] win32 -- Python 3.7.11 C:\tools\miniconda3\envs\py37\python.exe split = 'test' text_path = 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pytest-of-circleci\\pytest-0\\popen-gw0\\data6\\dataset.txt' tmp_path = WindowsPath('C:/Users/circleci/AppData/Local/Temp/pytest-of-circleci/pytest-0/popen-gw0/test_datasetdict_from_text_spl7') @pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"]) def test_datasetdict_from_text_split(split, text_path, tmp_path): if split: path = {split: text_path} else: split = "train" path = {"train": text_path, "test": text_path} cache_dir = tmp_path / "cache" expected_features = {"text": "string"} > dataset = TextDatasetReader(path, cache_dir=cache_dir).read() tests\io\test_text.py:118: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\io\text.py:43: in read use_auth_token=use_auth_token, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:588: in download_and_prepare self._download_prepared_from_hf_gcs(dl_manager.download_config) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:630: in _download_prepared_from_hf_gcs reader.download_from_hf_gcs(download_config, relative_data_dir) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\arrow_reader.py:260: in download_from_hf_gcs downloaded_dataset_info = cached_path(remote_dataset_info.replace(os.sep, "/")) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:301: in cached_path download_desc=download_config.download_desc, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:560: in get_from_cache headers=headers, 
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:476: in http_head max_retries=max_retries, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:397: in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\api.py:61: in request return session.request(method=method, url=url, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:529: in request resp = self.send(prep, **send_kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:645: in send r = adapter.send(request, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:840: in unbound_on_send return self._on_request(adapter, request, *a, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:780: in _on_request match, match_failed_reasons = self._find_match(request) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <responses.RequestsMock object at 0x000002048AD70588> request = <PreparedRequest [HEAD]> def _find_first_match(self, request): match_failed_reasons = [] > for i, match in enumerate(self._matches): E AttributeError: 'RequestsMock' object has no attribute '_matches' C:\tools\miniconda3\envs\py37\lib\site-packages\moto\core\models.py:289: AttributeError ```
closed
https://github.com/huggingface/datasets/issues/3839
2022-03-07T10:06:42
2022-05-20T14:13:43
2022-03-07T10:07:24
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,161,137,406
3,838
Add a data type for labeled images (image segmentation)
It might be a mix of Image and ClassLabel, and the color palette might be generated automatically. --- ### Example every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette (eg https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class with a color. So we might want to render the image as a colored image instead of a black and white one. <img width="785" alt="156741519-fbae6844-2606-4c28-837e-279d83d00865" src="https://user-images.githubusercontent.com/1676121/157005263-7058c584-2b70-465a-ad94-8a982f726cf4.png"> See https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/features/labeled_image.py for reference in Tensorflow
open
https://github.com/huggingface/datasets/issues/3838
2022-03-07T09:38:15
2024-05-29T16:50:55
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,161,109,031
3,837
Release: 1.18.4
null
closed
https://github.com/huggingface/datasets/pull/3837
2022-03-07T09:13:29
2022-03-07T11:07:35
2022-03-07T11:07:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,161,072,531
3,836
Logo float left
<img width="1000" alt="Screenshot 2022-03-07 at 09 35 29" src="https://user-images.githubusercontent.com/11827707/156996422-339ba43e-932b-4849-babf-9321cb99c922.png">
closed
https://github.com/huggingface/datasets/pull/3836
2022-03-07T08:38:34
2022-03-07T20:21:11
2022-03-07T09:14:11
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,161,029,205
3,835
The link given on the gigaword does not work
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/3835
2022-03-07T07:56:42
2022-03-15T12:30:23
2022-03-15T12:30:23
{ "login": "martin6336", "id": 26357784, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,160,657,937
3,834
Fix dead dataset scripts creation link.
Previous link gives 404 error. Updated with a new dataset scripts creation link.
closed
https://github.com/huggingface/datasets/pull/3834
2022-03-06T16:45:48
2022-03-07T12:12:07
2022-03-07T12:12:07
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[]
true
[]
1,160,543,713
3,833
Small typos in How-to-train tutorial.
null
closed
https://github.com/huggingface/datasets/pull/3833
2022-03-06T07:49:49
2022-03-07T12:35:33
2022-03-07T12:13:17
{ "login": "lkhphuc", "id": 12573521, "type": "User" }
[]
true
[]
1,160,503,446
3,832
Making Hugging Face the place to go for Graph NNs datasets
Let's make Hugging Face Datasets the central hub for GNN datasets :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field. What are some datasets worth integrating into the Hugging Face hub? Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Special thanks to @napoles-uach for his collaboration on identifying the first ones: - [ ] [SNAP-Stanford OGB Datasets](https://github.com/snap-stanford/ogb). - [ ] [SNAP-Stanford Pretrained GNNs Chemistry and Biology Datasets](https://github.com/snap-stanford/pretrain-gnns). - [ ] [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression) cc @osanseviero
open
https://github.com/huggingface/datasets/issues/3832
2022-03-06T03:02:58
2022-03-14T07:45:38
null
{ "login": "omarespejel", "id": 4755430, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "graph", "color": "7AFCAA" } ]
false
[]
1,160,501,000
3,831
when using to_tf_dataset with shuffle is true, not all completed batches are made
## Describe the bug When converting a dataset to a tf_dataset by using to_tf_dataset with shuffle set to true, the remainder is not converted into one batch ## Steps to reproduce the bug Here is the sample code: https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing ## Expected results Regardless of whether shuffle is true or not, a 67-row dataset should yield 5 batches when the batch size is 16. ## Actual results 4 batches ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3831
2022-03-06T02:43:50
2022-03-08T15:18:56
2022-03-08T15:18:56
{ "login": "greenned", "id": 42107709, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,160,181,404
3,830
Got error when load cnn_dailymail dataset
When using the datasets.load_dataset method to load the cnn_dailymail dataset, I got the error below: - Windows OS: FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories' - Google Colab: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' The code used to load the dataset: Windows OS: ``` from datasets import load_dataset dataset = load_dataset("cnn_dailymail", "3.0.0", cache_dir="D:\\SourceCode\\DataScience\\HuggingFace\\Data") ``` Google Colab: ``` import datasets train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train") ```
closed
https://github.com/huggingface/datasets/issues/3830
2022-03-05T01:43:12
2022-03-07T06:53:41
2022-03-07T06:53:41
{ "login": "wgong0510", "id": 78331051, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,160,154,352
3,829
[📄 Docs] Create a `datasets` performance guide.
## Brief Overview Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug, especially for users who are less experienced with building deep learning experiments. ## Feature Request Could we create a performance guide for using `datasets`, similar to: * [Better performance with the `tf.data` API](https://github.com/huggingface/datasets/issues/3735) * [Analyze `tf.data` performance with the TF Profiler](https://www.tensorflow.org/guide/data_performance_analysis) This performance guide should detail practical options for improving performance with `datasets`, and enumerate any common best practices. It should also show how to use tools like the PyTorch Profiler or the TF Profiler to identify any performance bottlenecks (example below). ![image](https://user-images.githubusercontent.com/3712347/156859152-a3cb9565-3ec6-4d39-8e77-56d0a75a4954.png) ## Related Issues * [wiki_dpr pre-processing performance #1670](https://github.com/huggingface/datasets/issues/1670) * [Adjusting chunk size for streaming datasets #3499](https://github.com/huggingface/datasets/issues/3499) * [how large datasets are handled under the hood #1004](https://github.com/huggingface/datasets/issues/1004) * [using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? #1830](https://github.com/huggingface/datasets/issues/1830) * [Best way to batch a large dataset? #315](https://github.com/huggingface/datasets/issues/315) * [Saving processed dataset running infinitely #1911](https://github.com/huggingface/datasets/issues/1911)
open
https://github.com/huggingface/datasets/issues/3829
2022-03-05T00:28:06
2022-03-10T16:24:27
null
{ "login": "dynamicwebpaige", "id": 3712347, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,160,064,029
3,828
The Pile's _FEATURE spec seems to be incorrect
## Describe the bug If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py: For "all" * the pile_set_name is never set for data * there's actually an id field inside of "meta" For subcorpora pubmed_central and hacker_news: * the meta is specified to be a string, but it's actually a dict with an id field inside. ## Steps to reproduce the bug ## Expected results Feature spec should match the data I'd think? ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
closed
https://github.com/huggingface/datasets/issues/3828
2022-03-04T21:25:32
2022-03-08T09:30:49
2022-03-08T09:30:48
{ "login": "dlwh", "id": 9633, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,159,878,436
3,827
Remove deprecated `remove_columns` param in `filter`
A leftover from #3803.
closed
https://github.com/huggingface/datasets/pull/3827
2022-03-04T17:23:26
2022-03-07T12:37:52
2022-03-07T12:37:51
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,159,851,110
3,826
Add IterableDataset.filter
_Needs https://github.com/huggingface/datasets/pull/3801 to be merged first_ I added `IterableDataset.filter` with an API that is a subset of `Dataset.filter`: ```python def filter(self, function, batched=False, batch_size=1000, with_indices=False, input_columns=None): ``` TODO: - [x] tests - [x] docs related to https://github.com/huggingface/datasets/issues/3444 and https://github.com/huggingface/datasets/issues/3753
closed
https://github.com/huggingface/datasets/pull/3826
2022-03-04T16:57:23
2022-03-09T17:23:13
2022-03-09T17:23:11
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,159,802,345
3,825
Update version and date in Wikipedia dataset
CC: @geohci
closed
https://github.com/huggingface/datasets/pull/3825
2022-03-04T16:05:27
2022-03-04T17:24:37
2022-03-04T17:24:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,159,574,186
3,824
Allow not specifying feature cols other than `predictions`/`references` in `Metric.compute`
Fix #3818
closed
https://github.com/huggingface/datasets/pull/3824
2022-03-04T12:04:40
2022-03-04T18:04:22
2022-03-04T18:04:21
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,159,497,844
3,823
500 internal server error when trying to open a dataset composed of Zarr stores
## Describe the bug The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code. The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of data there recentlyish. The Zarr stores are composed of lots of small files, which I am guessing is probably the problem, as we have another [OCF dataset](https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv) using xarray and Zarr, but with the Zarr stored on GCP public datasets instead of directly in HF datasets, and that one opens fine. In general, we were hoping to use HF datasets to release some more public geospatial datasets as benchmarks, which are commonly stored as Zarr stores as they can be compressed well and deal with the multi-dimensional data and coordinates fairly easily compared to other formats, but with this error, I'm assuming we should try a different format? For context, we are trying to have complete public model+data reimplementations of some SOTA weather and solar nowcasting models, like [MetNet, MetNet-2,](https://github.com/openclimatefix/metnet) [DGMR](https://github.com/openclimatefix/skillful_nowcasting), and [others](https://github.com/openclimatefix/graph_weather), which all have large, complex datasets. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("openclimatefix/mrms") ``` ## Expected results The dataset should be downloaded or open up ## Actual results A 500 internal server error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.15.25-1-MANJARO-x86_64-with-glibc2.35 - Python version: 3.9.10 - PyArrow version: 7.0.0
closed
https://github.com/huggingface/datasets/issues/3823
2022-03-04T10:37:14
2022-03-08T09:47:39
2022-03-08T09:47:39
{ "login": "jacobbieker", "id": 7170359, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,159,395,728
3,822
Add Biwi Kinect Head Pose Database
## Adding a Dataset - **Name:** Biwi Kinect Head Pose Database - **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and RGB images are provided, together with the ground truth in the form of the 3D location of the head and its rotation angles. - **Data:** [*link to the Github repository or current dataset location*](https://icu.ee.ethz.ch/research/datsets.html) - **Motivation:** Useful pose estimation dataset Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/3822
2022-03-04T08:48:39
2025-04-07T13:04:25
2022-06-01T13:00:47
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,159,371,927
3,821
Update Wikipedia dataset
This PR combines all updates to Wikipedia dataset. Once approved, this will be used to generate the pre-processed Wikipedia datasets. Finally, this PR will be able to be merged into master: - NOT using squash - BUT a regular MERGE (or REBASE+MERGE), so that all commits are preserved TODO: - [x] #3435 - [x] #3789 - [x] #3825 - [x] Run to get the pre-processed data for big languages (backward compatibility) - [x] #3958 CC: @geohci
closed
https://github.com/huggingface/datasets/pull/3821
2022-03-04T08:19:21
2022-03-21T12:35:23
2022-03-21T12:31:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,159,106,603
3,820
`pubmed_qa` checksum mismatch
## Describe the bug Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets try: datasets.load_dataset("pubmed_qa", "pqa_labeled") except Exception as e: print(e) try: datasets.load_dataset("pubmed_qa", "pqa_unlabeled") except Exception as e: print(e) try: datasets.load_dataset("pubmed_qa", "pqa_artificial") except Exception as e: print(e) ``` ## Expected results Successful download. ## Actual results Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare verify_checksums( File "/usr/local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ', 'https://drive.google.com/uc?export=download&id=15v1x6aQDlZymaHGP7cZJZZYFfeJt2NdS'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: macOS - Python version: 3.8.1 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/3820
2022-03-04T00:28:08
2022-03-04T09:42:32
2022-03-04T09:42:32
{ "login": "jon-tow", "id": 41410219, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,158,848,288
3,819
Fix typo in doc build yml
cc: @lhoestq
closed
https://github.com/huggingface/datasets/pull/3819
2022-03-03T20:08:44
2022-03-04T13:07:41
2022-03-04T13:07:41
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,158,788,545
3,818
Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI
**Is your feature request related to a problem? Please describe.** The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) do not work with the [SARI](https://github.com/huggingface/datasets/blob/master/metrics/sari/sari.py) metric. This metric not only relies on the predictions and references, but also on the input. For example, when the `add_batch` method is used, then the `compute()` method fails: ``` metric = load_metric("sari") metric.add_batch( predictions=["About 95 you now get in ."], references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]) metric.compute() > TypeError: _compute() missing 1 required positional argument: 'sources' ``` Therefore, the `compute()` method can only be used standalone: ``` metric = load_metric("sari") result = metric.compute( sources=["About 95 species are currently accepted ."], predictions=["About 95 you now get in ."], references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]) > {'sari': 26.953601953601954} ``` **Describe the solution you'd like** Support for an additional parameter `sources` in the `add_batch` and `add` methods of the `Metric` class. ``` add_batch(*, sources=None, predictions=None, references=None, **kwargs) add(*, sources=None, predictions=None, references=None, **kwargs) compute() ``` **Describe alternatives you've considered** I've tried to override `add_batch` and `add`; however, these are highly dependent on the `Metric` class. We could also write a simple function that computes the scores of a list of sentences, but then we lose the functionality of the original [add](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add) and [add_batch](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add_batch) methods. 
**Additional context** These methods are used in the transformers [pytorch examples](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py).
closed
https://github.com/huggingface/datasets/issues/3818
2022-03-03T18:57:54
2022-03-04T18:04:21
2022-03-04T18:04:21
{ "login": "lmvasque", "id": 6901031, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,158,592,335
3,817
Simplify Common Voice code
In #3736 we introduced one method to generate examples when streaming, that is different from the one when not streaming. In this PR I propose a new implementation which is simpler: it only has one function, based on `iter_archive`. And you still have access to local audio files when loading the dataset in non-streaming mode. cc @patrickvonplaten @polinaeterna @anton-l @albertvillanova since this will become the template for many audio datasets to come. This change can also trivially be applied to the other audio datasets that already exist. Using this line, you can get access to local files in non-streaming mode: ```python local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else None ```
closed
https://github.com/huggingface/datasets/pull/3817
2022-03-03T16:01:21
2022-03-04T14:51:48
2022-03-04T12:39:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,158,589,913
3,816
Doc new UI test workflows2
null
closed
https://github.com/huggingface/datasets/pull/3816
2022-03-03T15:59:14
2022-10-04T09:35:53
2022-03-03T16:42:15
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,158,589,512
3,815
Fix iter_archive getting reset
The `DownloadManager.iter_archive` method currently returns an iterator - which is **empty** once you have iterated over it. This means you can't pass the same archive iterator to several splits. To fix that, I changed the output of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times, instead of a one-time-use iterator. The `StreamingDownloadManager.iter_archive` already returns an appropriate iterable, and the code added in this PR is inspired by the one in `streaming_download_manager.py`
closed
https://github.com/huggingface/datasets/pull/3815
2022-03-03T15:58:52
2022-03-03T18:06:37
2022-03-03T18:06:13
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,158,518,995
3,814
Handle Nones in PyArrow struct
This PR fixes an issue introduced by #3575 where `None` values stored in PyArrow arrays/structs would get ignored by `cast_storage` or by the `pa.array(cast_to_python_objects(..))` pattern. To fix the former, it also bumps the minimal PyArrow version to v5.0.0 to use the `mask` param in `pa.StructArray`.
closed
https://github.com/huggingface/datasets/pull/3814
2022-03-03T15:03:35
2022-03-03T16:37:44
2022-03-03T16:37:43
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,158,474,859
3,813
Add MetaShift dataset
## Adding a Dataset - **Name:** MetaShift - **Description:** collection of 12,868 sets of natural images across 410 classes - **Paper:** https://arxiv.org/abs/2202.06523v1 - **Data:** https://github.com/weixin-liang/metashift Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/3813
2022-03-03T14:26:45
2022-04-10T13:39:59
2022-04-10T13:39:59
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,158,369,995
3,812
benchmark streaming speed with tar vs zip archives
# do not merge ## Hypothesis packing data into a single zip archive could allow us not to care about splitting data into several tar archives for efficient streaming which is annoying (since data creators usually host the data in a single tar) ## Data I host it [here](https://huggingface.co/datasets/polinaeterna/benchmark_dataset/) ## I checked three configurations: 1. All data in one zip archive, streaming only those files that exist in the split metadata file (we can access them directly with no need to iterate over the full archive), see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR196) 2. All data in three splits, the standard way to make streaming efficient, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR174) 3. All data in a single tar, iterate over the full archive and take only files existing in the split metadata file, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR150) ## Results 1. one zip ![image](https://user-images.githubusercontent.com/16348744/156567611-e3652087-7147-4cf0-9047-9cbc00ec71f5.png) 2. three tars ![image](https://user-images.githubusercontent.com/16348744/156567688-2a462107-f83e-4722-8ea3-71a13b56c998.png) 3. one tar ![image](https://user-images.githubusercontent.com/16348744/156567772-1bceb5f7-e7d9-4fa3-b31b-17fec5f9a5a7.png) I didn't check on the full data as it's time consuming, but anyway it's pretty obvious that the one-zip way is not a good idea. Here it's even worse than full iteration over the tar containing all three splits (but that would depend on the case).
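The structural difference behind these numbers can be sketched with the stdlib: a zip has a central directory, so a single member can be read without touching the others, while a tar has no index and must be scanned in order. (Locally the zip lookup is cheap; when streaming over HTTP, each zip member becomes a separate ranged request, which is consistent with the one-zip configuration performing worst above.) This is an illustrative sketch, not the benchmark code.

```python
import io
import tarfile
import zipfile

# Build one zip and one tar holding the same three files.
files = {"train.txt": b"t", "dev.txt": b"d", "test.txt": b"s"}

zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    for name, data in files.items():
        zf.writestr(name, data)

tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tf:
    for name, data in files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))

# Zip: the central directory lets us jump straight to one member...
with zipfile.ZipFile(io.BytesIO(zip_buf.getvalue())) as zf:
    dev_from_zip = zf.read("dev.txt")

# ...while tar has no index: finding a member means scanning in order.
with tarfile.open(fileobj=io.BytesIO(tar_buf.getvalue())) as tf:
    scanned = []
    dev_from_tar = None
    for member in tf:
        scanned.append(member.name)
        if member.name == "dev.txt":
            dev_from_tar = tf.extractfile(member).read()
            break
```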
closed
https://github.com/huggingface/datasets/pull/3812
2022-03-03T12:48:41
2022-03-03T14:55:34
2022-03-03T14:55:33
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,158,234,407
3,811
Update dev doc gh workflows
Reflect changes from https://github.com/huggingface/transformers/pull/15891
closed
https://github.com/huggingface/datasets/pull/3811
2022-03-03T10:29:01
2022-10-04T09:35:54
2022-03-03T10:45:54
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,158,202,093
3,810
Update version of xcopa dataset
Note that there was a version update of the `xcopa` dataset: https://github.com/cambridgeltl/xcopa/releases We updated our loading script, but we did not bump a new version number: - #3254 This PR updates our loading script version from `1.0.0` to `1.1.0`.
closed
https://github.com/huggingface/datasets/pull/3810
2022-03-03T09:58:25
2022-03-03T10:44:30
2022-03-03T10:44:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,158,143,480
3,809
Checksums didn't match for datasets on Google Drive
## Describe the bug Datasets hosted on Google Drive do not seem to work right now. Loading them fails with a checksum error. ## Steps to reproduce the bug ```python from datasets import load_dataset for dataset in ["head_qa", "yelp_review_full"]: try: load_dataset(dataset) except Exception as exception: print("Error", dataset, exception) ``` Here is a [colab](https://colab.research.google.com/drive/1wOtHBmL8I65NmUYakzPV5zhVCtHhi7uQ#scrollTo=cDzdCLlk-Bo4). ## Expected results The datasets should be loaded. ## Actual results ``` Downloading and preparing dataset head_qa/es (download: 75.69 MiB, generated: 2.86 MiB, post-processed: Unknown size, total: 78.55 MiB) to /root/.cache/huggingface/datasets/head_qa/es/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9... Error head_qa Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t'] Downloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /root/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43... Error yelp_review_full Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0'] ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3809
2022-03-03T09:01:10
2022-03-03T09:24:58
2022-03-03T09:24:05
{ "login": "muelletm", "id": 11507045, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,157,650,043
3,808
Pre-Processing Cache Fails when using a Factory pattern
## Describe the bug If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the data will be reproduced each time. ## Steps to reproduce the bug ```python def preprocess_function_factory(augmentation=None): def preprocess_function(examples): # Tokenize the texts if augmentation: conversions1 = [ augmentation(example) for example in examples[sentence1_key] ] if sentence2_key is None: args = (conversions1,) else: conversions2 = [ augmentation(example) for example in examples[sentence2_key] ] args = (conversions1, conversions2) else: args = ( (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key]) ) result = tokenizer( *args, padding=padding, max_length=max_seq_length, truncation=True ) # Map labels to IDs (not necessary for GLUE tasks) if label_to_id is not None and "label" in examples: result["label"] = [ (label_to_id[l] if l != -1 else -1) for l in examples["label"] ] return result return preprocess_function capitalize = lambda x: x.capitalize() preprocess_function = preprocess_function_factory(augmentation=capitalize) print(hash(preprocess_function)) # This will change on each run raw_datasets = raw_datasets.map( preprocess_function, batched=True, load_from_cache_file=True, desc="Running transformation and tokenizer on dataset", ) ``` ## Expected results Running the code twice will cause the cache to be re-used. ## Actual results Running the code twice causes the whole dataset to be re-processed
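A common workaround for unstable hashes of factory-made closures is to move the logic into a module-level function and bind the varying pieces with `functools.partial`, which serializes by reference. Note that `datasets` fingerprints functions with a dill-based hash rather than stdlib pickle, so the sketch below (with hypothetical `preprocess`/`capitalize` names) only illustrates the principle: a closure is a fresh local object every time, while a partial over a top-level function serializes deterministically.

```python
import functools
import pickle


def capitalize(text):
    return text.capitalize()


def preprocess(examples, augmentation=None):
    # Hypothetical stand-in for the tokenizing preprocess_function above.
    texts = [augmentation(t) if augmentation else t for t in examples["text"]]
    return {"text": texts}


def factory(augmentation):
    # The pattern from the bug report: returns a new local function object
    # on every call, which stdlib pickle cannot even serialize.
    def preprocess_fn(examples):
        return preprocess(examples, augmentation=augmentation)
    return preprocess_fn


try:
    pickle.dumps(factory(capitalize))
    closure_picklable = True
except (pickle.PicklingError, AttributeError, TypeError):
    closure_picklable = False

# Two independently built partials over the same module-level function
# serialize to identical bytes, so a content-based hash stays stable:
p1 = functools.partial(preprocess, augmentation=capitalize)
p2 = functools.partial(preprocess, augmentation=capitalize)
```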
closed
https://github.com/huggingface/datasets/issues/3808
2022-03-02T20:18:43
2022-03-10T23:01:47
2022-03-10T23:01:47
{ "login": "Helw150", "id": 9847335, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,157,531,812
3,807
NonMatchingChecksumError in xcopa dataset
## Describe the bug Loading the xcopa dataset doesn't work, it fails due to a mismatch in the checksum. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("xcopa", "it") ``` ## Expected results The dataset should be loaded correctly. ## Actual results Fails with: ```python in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/cambridgeltl/xcopa/archive/master.zip'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3, and 1.18.4.dev0 - Platform: - Python version: 3.8 - PyArrow version:
closed
https://github.com/huggingface/datasets/issues/3807
2022-03-02T18:10:19
2022-05-20T06:00:42
2022-03-03T17:40:31
{ "login": "afcruzs-ms", "id": 93286455, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,157,505,826
3,806
Fix Spanish data file URL in wiki_lingua dataset
This PR fixes the URL for the Spanish data file. Previously, Spanish had the same URL as the Vietnamese data file.
closed
https://github.com/huggingface/datasets/pull/3806
2022-03-02T17:43:42
2022-03-03T08:38:17
2022-03-03T08:38:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,157,454,884
3,805
Remove decode: true for image feature in head_qa
This was erroneously added in https://github.com/huggingface/datasets/commit/701f128de2594e8dc06c0b0427c0ba1e08be3054. This PR removes it.
closed
https://github.com/huggingface/datasets/pull/3805
2022-03-02T16:58:34
2022-03-07T12:13:36
2022-03-07T12:13:35
{ "login": "craffel", "id": 417568, "type": "User" }
[]
true
[]
1,157,297,278
3,804
Text builder with custom separator line boundaries
**Is your feature request related to a problem? Please describe.** The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()` which splits the text on several line boundaries. Not all of them are always wanted. **Describe the solution you'd like** ```python if self.config.sample_by == "line": batch_idx = 0 while True: batch = f.read(self.config.chunksize) if not batch: break batch += f.readline() # finish current line if self.config.custom_newline is None: batch = batch.splitlines(keepends=self.config.keep_linebreaks) else: batch = batch.split(self.config.custom_newline)[:-1] pa_table = pa.Table.from_arrays([pa.array(batch)], schema=schema) # Uncomment for debugging (will print the Arrow table size and elements) # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}") # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows))) yield (file_idx, batch_idx), pa_table batch_idx += 1 ``` **A clear and concise description of what you want to happen.** Creating the dataset rows with a subset of the `splitlines()` line boundaries.
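The difference the request is about shows up directly in stdlib behavior: `str.splitlines()` breaks on the full set of Unicode line boundaries (form feed, LINE SEPARATOR, NEL, etc.), while `str.split(separator)` breaks only on the chosen separator, which is what the proposed `custom_newline` option would expose.

```python
text = "first\nsecond\u2028still-second\x0cstill-second-too\nthird"

# splitlines() also treats U+2028 (LINE SEPARATOR) and \x0c (form feed)
# as line boundaries, producing extra rows:
by_splitlines = text.splitlines()

# split("\n") keeps those characters inside the row, so only "\n"
# delimits dataset rows:
by_newline = text.split("\n")
```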
open
https://github.com/huggingface/datasets/issues/3804
2022-03-02T14:50:16
2022-03-16T15:53:59
null
{ "login": "cronoik", "id": 18630848, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,157,271,679
3,803
Remove deprecated methods/params (preparation for v2.0)
This PR removes the following deprecated methods/params: * `Dataset.cast_`/`DatasetDict.cast_` * `Dataset.dictionary_encode_column_`/`DatasetDict.dictionary_encode_column_` * `Dataset.remove_columns_`/`DatasetDict.remove_columns_` * `Dataset.rename_columns_`/`DatasetDict.rename_columns_` * `prepare_module` * param `script_version` in `load_dataset`/`load_metric` * param `version` in `hf_github_url`
closed
https://github.com/huggingface/datasets/pull/3803
2022-03-02T14:29:12
2022-03-02T14:53:21
2022-03-02T14:53:21
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,157,009,964
3,802
Release of FairLex dataset
**FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing** We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian, and Chinese), and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. *Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, Anders Søgaard. FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing. 2022. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.* Note: Please review this initial commit, and I'll update the publication link once I have the arXiv version. Thanks!
closed
https://github.com/huggingface/datasets/pull/3802
2022-03-02T10:40:18
2022-03-02T15:21:10
2022-03-02T15:18:54
{ "login": "iliaschalkidis", "id": 1626984, "type": "User" }
[]
true
[]
1,155,649,279
3,801
[Breaking] Align `map` when streaming: update instead of overwrite + add missing parameters
Currently the datasets in streaming mode and in non-streaming mode have two distinct APIs for `map` processing. In this PR I'm aligning the two by changing `map` in streaming mode. This includes a **major breaking change** and will require a major release of the library: **Datasets 2.0** In particular, `Dataset.map` adds new columns (with dict.update) BUT `IterableDataset.map` used to discard previous columns (it overwrites the dict). In this PR I'm changing the `IterableDataset.map` to behave the same way as `Dataset.map`: it will update the examples instead of overwriting them. I'm also adding those missing parameters to streaming `map`: with_indices, input_columns, remove_columns ### TODO - [x] tests - [x] docs Related to https://github.com/huggingface/datasets/issues/3444
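The semantic change can be illustrated with a minimal stand-in for an iterable dataset's `map` — a sketch of the before/after behavior, not the `datasets` code itself.

```python
def map_overwrite(examples, fn):
    # Old streaming behavior: the function's output replaces the example.
    for ex in examples:
        yield fn(ex)


def map_update(examples, fn):
    # New behavior, aligned with Dataset.map: the output is merged into
    # the example with dict.update, so existing columns are kept.
    for ex in examples:
        out = dict(ex)
        out.update(fn(ex))
        yield out


def add_length(ex):
    return {"length": len(ex["text"])}


data = [{"text": "hello"}, {"text": "world"}]
old_result = list(map_overwrite(data, add_length))
new_result = list(map_update(data, add_length))
```

`old_result` drops the `text` column entirely, while `new_result` keeps it alongside the new `length` column.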
closed
https://github.com/huggingface/datasets/pull/3801
2022-03-01T18:06:43
2022-03-07T16:30:30
2022-03-07T16:30:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,155,620,761
3,800
Added computer vision tasks
The previous PR was in my fork, so I thought it'd be easier to do it from a branch. Added computer vision task datasets according to HF tasks.
closed
https://github.com/huggingface/datasets/pull/3800
2022-03-01T17:37:46
2022-03-04T07:15:55
2022-03-04T07:15:55
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[]
true
[]
1,155,356,102
3,799
Xtreme-S Metrics
**Added datasets (TODO)**: - [x] MLS - [x] Covost2 - [x] Minds-14 - [x] Voxpopuli - [x] FLoRes (need data) **Metrics**: Done
closed
https://github.com/huggingface/datasets/pull/3799
2022-03-01T13:42:28
2022-03-16T14:40:29
2022-03-16T14:40:26
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
1,154,411,066
3,798
Fix error message in CSV loader for newer Pandas versions
Fix the error message in the CSV loader for `Pandas >= 1.4`. To fix this, I directly print the current file name in the for-loop. An alternative would be to use a check similar to this: ```python csv_file_reader.handle.handle if datasets.config.PANDAS_VERSION >= version.parse("1.4") else csv_file_reader.f ``` CC: @SBrandeis
closed
https://github.com/huggingface/datasets/pull/3798
2022-02-28T18:24:10
2022-02-28T18:51:39
2022-02-28T18:51:38
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,154,383,063
3,797
Reddit dataset card contribution
Description tags for webis-tldr-17 added.
closed
https://github.com/huggingface/datasets/pull/3797
2022-02-28T17:53:18
2023-03-09T22:08:58
2022-03-01T12:58:57
{ "login": "anna-kay", "id": 56791604, "type": "User" }
[]
true
[]
1,154,298,629
3,796
Skip checksum computation if `ignore_verifications` is `True`
This will speed up the loading of the datasets where the number of data files is large (can easily happen with `imagefolder`, for instance)
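The speed-up amounts to gating the expensive per-file hashing behind the flag: record cheap metadata unconditionally, but only compute checksums when verification is requested. The helper name below is hypothetical, for illustration only.

```python
import hashlib


def file_info(data, ignore_verifications=False):
    """Record the size always; compute the sha256 checksum only when
    verification is enabled (hashing every file dominates load time
    when there are many data files)."""
    info = {"num_bytes": len(data)}
    if not ignore_verifications:
        info["checksum"] = hashlib.sha256(data).hexdigest()
    return info


fast = file_info(b"some bytes", ignore_verifications=True)
verified = file_info(b"some bytes", ignore_verifications=False)
```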
closed
https://github.com/huggingface/datasets/pull/3796
2022-02-28T16:28:45
2022-02-28T17:03:46
2022-02-28T17:03:46
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,153,261,281
3,795
can not flatten natural_questions dataset
## Describe the bug after downloading the natural_questions dataset, can not flatten the dataset considering there are `long answer` and `short answer` in `annotations`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('natural_questions',cache_dir = 'data/dataset_cache_dir') dataset['train'].flatten() ``` ## Expected results a dataset with `long_answer` as features ## Actual results Traceback (most recent call last): File "temp.py", line 5, in <module> dataset['train'].flatten() File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper out = func(self, *args, **kwargs) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1296, in flatten dataset._data = update_metadata_with_features(dataset._data, dataset.features) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in update_metadata_with_features features = Features({col_name: features[col_name] for col_name in table.column_names}) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in <dictcomp> features = Features({col_name: features[col_name] for col_name in table.column_names}) KeyError: 'annotations.long_answer' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.13 - Platform: MBP - Python version: 3.8 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3795
2022-02-27T13:57:40
2022-03-21T14:36:12
2022-03-21T14:36:12
{ "login": "Hannibal046", "id": 38466901, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,153,185,343
3,794
Add Mahalanobis distance metric
Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P. In this PR I implement the metric in a simple way with the help of numpy only. Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can make this metric accept texts as input and encode them with a featurization model, if that is desirable.
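The core computation is d(x, P) = sqrt((x − μ)ᵀ Σ⁻¹ (x − μ)), with μ and Σ estimated from the reference distribution. The PR implements this with numpy; below is a pure-Python sketch for the 2-D case only, inverting the 2×2 covariance matrix in closed form.

```python
import math


def mahalanobis_2d(point, data):
    """Mahalanobis distance from a 2-D point to the distribution of `data`.

    Estimates the mean and sample covariance, inverts the 2x2 covariance
    matrix in closed form, then applies d = sqrt((x - mu)^T S^-1 (x - mu)).
    """
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    det = sxx * syy - sxy * sxy
    # Closed-form inverse of [[sxx, sxy], [sxy, syy]]:
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    dx, dy = point[0] - mx, point[1] - my
    return math.sqrt(dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy)


data = [(0, 0), (2, 0), (0, 2), (2, 2)]  # mean (1, 1), covariance (4/3) * I
distance = mahalanobis_2d((3, 1), data)
```

With a covariance proportional to the identity, the distance reduces to a scaled Euclidean distance, which makes the value easy to check by hand.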
closed
https://github.com/huggingface/datasets/pull/3794
2022-02-27T10:56:31
2022-03-02T14:46:15
2022-03-02T14:46:15
{ "login": "JoaoLages", "id": 17574157, "type": "User" }
[]
true
[]
1,150,974,950
3,793
Docs new UI actions no self hosted
Removes the need to have a self-hosted runner for the dev documentation
closed
https://github.com/huggingface/datasets/pull/3793
2022-02-25T23:48:55
2022-03-01T15:55:29
2022-03-01T15:55:28
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[]
true
[]
1,150,812,404
3,792
Checksums didn't match for dataset source
## Dataset viewer issue for 'wiki_lingua*' **Link:** *link to the dataset viewer page* `data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]") ` *short description of the issue* ``` [NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff']]() ``` Am I the one who added this dataset ? No
closed
https://github.com/huggingface/datasets/issues/3792
2022-02-25T19:55:09
2024-03-13T12:25:08
2022-02-28T08:44:18
{ "login": "rafikg", "id": 13174842, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,150,733,475
3,791
Add `data_dir` to `data_files` resolution and misc improvements to HfFileSystem
As discussed in https://github.com/huggingface/datasets/pull/2830#issuecomment-1048989764, this PR adds a QOL improvement to easily reference the files inside a directory in `load_dataset` using the `data_dir` param (very handy for ImageFolder because it avoids globbing, but also useful for the other loaders). Additionally, it fixes the issue with `HfFileSystem.isdir`, which would previously always return `False`, and aligns the path-handling logic in `HfFileSystem` with `fsspec.GitHubFileSystem`.
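The QOL improvement is roughly: passing `data_dir` resolves to every file under that directory, sparing the user from writing a glob pattern by hand. The `resolve_data_files` helper below is a hypothetical sketch of that behavior, not the `datasets` resolution logic.

```python
import tempfile
from pathlib import Path


def resolve_data_files(repo_root, data_dir=None):
    """Sketch: with `data_dir`, take every file under that directory,
    equivalent to globbing `data_dir/**` but without writing the pattern."""
    root = Path(repo_root) / data_dir if data_dir else Path(repo_root)
    return sorted(
        p.relative_to(repo_root).as_posix() for p in root.rglob("*") if p.is_file()
    )


with tempfile.TemporaryDirectory() as repo:
    (Path(repo) / "images" / "train").mkdir(parents=True)
    (Path(repo) / "images" / "train" / "cat.png").write_bytes(b"")
    (Path(repo) / "README.md").write_text("docs")
    # Only files under `images` are resolved; the README is skipped.
    resolved = resolve_data_files(repo, data_dir="images")
```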
closed
https://github.com/huggingface/datasets/pull/3791
2022-02-25T18:26:35
2022-03-01T13:10:43
2022-03-01T13:10:42
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,150,646,899
3,790
Add doc builder scripts
I added the three scripts: - build_dev_documentation.yml - build_documentation.yml - delete_dev_documentation.yml I got them from `transformers` and did a few changes: - I removed the `transformers`-specific dependencies - I changed all the paths to be "datasets" instead of "transformers" - I passed the `--library_name datasets` arg to the `doc-builder build` command (according to https://github.com/huggingface/doc-builder/pull/94/files#diff-bcc33cf7c223511e498776684a9a433810b527a0a38f483b1487e8a42b6575d3R26) cc @LysandreJik @mishig25
closed
https://github.com/huggingface/datasets/pull/3790
2022-02-25T16:38:47
2022-03-01T15:55:42
2022-03-01T15:55:41
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,150,587,404
3,789
Add URL and ID fields to Wikipedia dataset
This PR adds the URL field, so that we conform to proper attribution, required by their license: provide credit to the authors by including a hyperlink (where possible) or URL to the page or pages you are re-using. About the conversion from title to URL, I found that apart from replacing blanks with underscores, some other special character must also be percent-encoded (e.g. `"` to `%22`): https://meta.wikimedia.org/wiki/Help:URL Therefore, I have finally used `urllib.parse.quote` function. This additionally percent-encodes non-ASCII characters, but Wikimedia docs say these are equivalent: > For the other characters either the code or the character can be used in internal and external links, they are equivalent. The system does a conversion when needed. > [[%C3%80_propos_de_M%C3%A9ta]] > is rendered as [À_propos_de_Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), almost like [À propos de Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), which leads to this page on Meta with in the address bar the URL > [http://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) > while [http://meta.wikipedia.org/wiki/À_propos_de_Méta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) leads to the same. Fix #3398. CC: @geohci
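The title-to-URL conversion described above can be sketched with `urllib.parse.quote`: blanks become underscores, then the remaining special characters are percent-encoded (quotes to `%22`, non-ASCII letters to their UTF-8 escapes). The `title_to_url` helper and the English base URL are illustrative assumptions.

```python
from urllib.parse import quote


def title_to_url(title, base="https://en.wikipedia.org/wiki/"):
    # Replace blanks with underscores, then percent-encode the rest.
    # quote() leaves letters, digits, and "_.-~" untouched and encodes
    # non-ASCII characters as UTF-8 escape sequences.
    return base + quote(title.replace(" ", "_"))


url = title_to_url("À propos de Méta")
quoted = title_to_url('The "Foo" Bar')
```

As the Wikimedia docs note, the encoded and unencoded forms address the same page; the encoded form is just the canonical URL.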
closed
https://github.com/huggingface/datasets/pull/3789
2022-02-25T15:34:37
2022-03-04T08:24:24
2022-03-04T08:24:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,150,375,720
3,788
Only-data dataset loaded unexpectedly as validation split
## Describe the bug As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`.
open
https://github.com/huggingface/datasets/issues/3788
2022-02-25T12:11:39
2022-02-28T11:22:22
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,150,235,569
3,787
Fix Google Drive URL to avoid Virus scan warning
This PR fixes, in the datasets library instead of in every specific dataset, the issue of downloading the Virus scan warning page instead of the actual data file for Google Drive URLs. Fix #3786, fix #3784.
closed
https://github.com/huggingface/datasets/pull/3787
2022-02-25T09:35:12
2022-03-04T20:43:32
2022-02-25T11:56:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,150,233,067
3,786
Bug downloading Virus scan warning page from Google Drive URLs
## Describe the bug Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself. See: - #3758 - #3773 - #3784
closed
https://github.com/huggingface/datasets/issues/3786
2022-02-25T09:32:23
2022-03-03T09:25:59
2022-02-25T11:56:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,150,069,801
3,785
Fix: Bypass Virus Checks in Google Drive Links (CNN-DM dataset)
This commit fixes the issue described in #3784. By adding an extra parameter to the end of Google Drive links, we are able to bypass the virus check and download the datasets. So, if the original link looked like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ The new link now looks like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ&confirm=t Fixes #3784
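Appending the parameter can be done robustly with `urllib.parse` rather than string concatenation, which also handles URLs that already carry a fragment or oddly ordered query parameters. A sketch of that approach (the helper name is an assumption):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit


def add_confirm_param(url):
    """Append `confirm=t` to a Google Drive download URL while keeping
    the existing query parameters intact."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = parse_qsl(query)
    params.append(("confirm", "t"))
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))


original = "https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ"
fixed = add_confirm_param(original)
```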
closed
https://github.com/huggingface/datasets/pull/3785
2022-02-25T05:48:57
2022-03-03T16:43:47
2022-03-03T14:03:37
{ "login": "AngadSethi", "id": 58678541, "type": "User" }
[]
true
[]
1,150,057,955
3,784
Unable to Download CNN-Dailymail Dataset
## Describe the bug I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening: - The dataset sits in Google Drive, and both the CNN and DM datasets are large. - Google is unable to scan the folder for viruses, **so the link which would originally download the dataset, now downloads the source code of this web page:** ![image](https://user-images.githubusercontent.com/58678541/155658435-c2f497d7-7601-4332-94b1-18a62dd96422.png) - **This leads to the following error**: ```python NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` ## Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train") ``` ## Expected results That the dataset is downloaded and processed just like other datasets. ## Actual results Hit with this error: ```python NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3784
2022-02-25T05:24:47
2022-03-03T14:05:17
2022-03-03T14:05:17
{ "login": "AngadSethi", "id": 58678541, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,149,256,744
3,783
Support passing str to iter_files
null
closed
https://github.com/huggingface/datasets/pull/3783
2022-02-24T12:58:15
2022-02-24T16:01:40
2022-02-24T16:01:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,148,994,022
3,782
Error of writing with different schema, due to nonpreservation of nullability
## 1. Case ``` dataset.map( batched=True, disable_nullable=True, ) ``` will get the following error here https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L516 `pyarrow.lib.ArrowInvalid: Tried to write record batch with different schema` ## 2. Debugging ### 2.1 tracing During `_map_single`, the following are called https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_dataset.py#L2523 https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L511 ### 2.2. Observation The problem is, even after `table_cast`, `pa_table.schema != self._schema` `pa_table.schema` (before/after `table_cast`) ``` input_ids: list<item: int32> child 0, item: int32 ``` `self._schema` ``` input_ids: list<item: int32> not null child 0, item: int32 ``` ### 2.3. Reason https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1121 Here we lose the nullability stored in `schema` because it seems that `Features` is always nullable and doesn't store nullability. https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1103 So, casting to a schema from such `Features` loses nullability, and eventually causes the error of writing with a different schema ## 3. Solution 1. Let `Features` store nullability. 2. Directly cast the table with the original schema rather than the schema from the converted `Features`. (this PR) 3. Don't `cast_table` when `write_table`
closed
https://github.com/huggingface/datasets/pull/3782
2022-02-24T08:23:07
2022-03-03T14:54:39
2022-03-03T14:54:39
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
1,148,599,680
3,781
Reddit dataset card additions
The changes proposed are based on the "TL;DR: Mining Reddit to Learn Automatic Summarization" paper & https://zenodo.org/record/1043504#.YhaKHpbQC38 It is a Reddit dataset indeed, but the name given to the dataset by the authors is Webis-TLDR-17 (corpus), so perhaps it should be modified as well. The task at which the dataset is aimed is abstractive summarization.
closed
https://github.com/huggingface/datasets/pull/3781
2022-02-23T21:29:16
2022-02-28T18:00:40
2022-02-28T11:21:14
{ "login": "anna-kay", "id": 56791604, "type": "User" }
[]
true
[]