Columns: title (string, 1-290 chars), body (string, 0-228k chars), html_url (string, 46-51 chars), comments (list), pull_request (dict), number (int64, 1-5.59k), is_pull_request (bool, 2 classes)
HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
## Describe the bug According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download "custom data configuration" for JSON, despite I have run the program once and cached all data into the same --cache_dir. "Downloading" is not an issue when running with local disk, but crashes often with cloud storage because (1) multiply GPU processes try to access the same file, AND (2) FileLocker fails to synchronize all processes, due to storage throttling. 99% of times, when the main process releases FileLocker, the file is not actually ready for access in cloud storage and thus triggers "FileNotFound" errors for all other processes. Well, another way to resolve the problem is to investigate super reliable cloud storage, but that's out of scope here. ## Steps to reproduce the bug ``` export HF_DATASETS_OFFLINE=1 python run_clm.py --model_name_or_path=models/gpt-j-6B --train_file=trainpy.v2.train.json --validation_file=trainpy.v2.eval.json --cache_dir=datacache/trainpy.v2 ``` ## Expected results datasets should stop all "downloading" behavior but reuse the cached JSON configuration. I think the problem here is part of the cache directory path, "default-471372bed4b51b53", is randomly generated, and it could change if some parameters changed. And I didn't find a way to use a fixed path to ensure datasets to reuse cached data every time. ## Actual results The logging shows datasets are still downloading into "datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426". ``` 12/16/2021 10:25:59 - WARNING - datasets.builder - Using custom data configuration default-471372bed4b51b53 12/16/2021 10:25:59 - INFO - datasets.builder - Generating dataset json (datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426) Downloading and preparing dataset json/default to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426... 100%|██████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 17623.13it/s] 12/16/2021 10:25:59 - INFO - datasets.utils.download_manager - Downloading took 0.0 min 12/16/2021 10:26:00 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min 100%|███████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1206.99it/s] 12/16/2021 10:26:00 - INFO - datasets.utils.info_utils - Unable to verify checksums. 12/16/2021 10:26:00 - INFO - datasets.builder - Generating split train 12/16/2021 10:26:01 - INFO - datasets.builder - Generating split validation 12/16/2021 10:26:02 - INFO - datasets.utils.info_utils - Unable to verify splits sizes. Dataset json downloaded and prepared to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426. Subsequent calls will reuse this data. 100%|█████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 53.54it/s] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux - Python version: 3.8.10 - PyArrow version: 6.0.1
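A minimal workaround sketch (not from the issue): pin the prepared data to a fixed, deterministic path with `save_to_disk`/`load_from_disk` so later runs skip the JSON builder entirely. The target directory below is hypothetical.
```python
import os
from datasets import load_dataset, load_from_disk

processed_path = "datacache/trainpy.v2.arrow"  # hypothetical fixed location

if os.path.isdir(processed_path):
    # Reuses the prepared Arrow data directly: no builder, no "downloading" step.
    dataset = load_from_disk(processed_path)
else:
    dataset = load_dataset(
        "json",
        data_files={"train": "trainpy.v2.train.json", "validation": "trainpy.v2.eval.json"},
    )
    dataset.save_to_disk(processed_path)
```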
https://github.com/huggingface/datasets/issues/3447
[ "Hi ! Indeed it says \"downloading and preparing\" but in your case it didn't need to download anything since you used local files (it would have thrown an error otherwise). I think we can improve the logging to make it clearer in this case", "@lhoestq Thank you for explaining. I am sorry but I was not clear abou...
null
3,447
false
Remove redundant local path information in audio/image datasets
Remove the redundant path information in the audio/image dataset as discussed in https://github.com/huggingface/datasets/pull/3430#issuecomment-994734828 TODOs: * [ ] merge https://github.com/huggingface/datasets/pull/3430 * [ ] merge https://github.com/huggingface/datasets/pull/3364 * [ ] re-generate the info files of the updated audio datasets cc: @patrickvonplaten @anton-l @nateraw (I expect this to break the audio/vision examples in Transformers; after this change you'll be able to access underlying paths as follows `dset = dset.cast_column("audio", Audio(..., decode=False)); path = dset[0]["audio"]`)
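A minimal sketch of the access pattern mentioned above, assuming a `datasets` version where the `Audio` feature accepts `decode=False`; the dataset name is hypothetical, and the exact return value (a plain path vs. a dict with "path"/"bytes") may vary by version.
```python
from datasets import load_dataset, Audio

dset = load_dataset("some_audio_dataset", split="train")  # hypothetical dataset
# Disable decoding so accessing the column does not load/resample the audio.
dset = dset.cast_column("audio", Audio(decode=False))
raw = dset[0]["audio"]  # underlying storage (file path / bytes), not a decoded array
print(raw)
```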
https://github.com/huggingface/datasets/pull/3446
[ "Cool, I'm in favor of this PR. Our official examples in speech already make use of `\"audio\"` so no need to change anything there. It would be great if we could prominently feature how one can get the audio path without decoding in the docs.", "@patrickvonplaten Yes, I agree.\r\n\r\ncc @stevhliu we should add a...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3446", "html_url": "https://github.com/huggingface/datasets/pull/3446", "diff_url": "https://github.com/huggingface/datasets/pull/3446.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3446.patch", "merged_at": null }
3,446
true
question
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
https://github.com/huggingface/datasets/issues/3445
[ "Hi ! What's your question ?" ]
null
3,445
false
Align the Dataset and IterableDataset processing API
## Intro items marked like <s>this</s> are done already :) Currently the two classes have two distinct API for processing: ### The `.map()` method Both have those parameters in common: function, batched, batch_size - IterableDataset is missing those parameters: <s>with_indices</s>, with_rank, <s>input_columns</s>, <s>drop_last_batch</s>, <s>remove_columns</s>, features, disable_nullable, fn_kwargs, num_proc - Dataset also has additional parameters that are exclusive, due to caching: keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, suffix_template, new_fingerprint - <s>There is also an important difference in terms of behavior: **Dataset.map adds new columns** (with dict.update) BUT **IterableDataset discards previous columns** (it overwrites the dict) IMO the two methods should have the same behavior. This would be an important breaking change though.</s> - Dataset.map is eager while IterableDataset.map is lazy ### The `.shuffle()` method - <s>Both have an optional seed parameter, but IterableDataset requires a mandatory parameter buffer_size to control the size of the local buffer used for approximate shuffling.</s> - <s>IterableDataset is missing the parameter generator</s> - Also Dataset has exclusive parameters due to caching: keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint ### The `.with_format()` method - IterableDataset only supports "torch" (it misses tf, jax, pandas, arrow) and is missing the parameters: columns, output_all_columns and format_kwargs - other methods like `set_format`, `reset_format` or `formatted_as` are also missing ### Other methods - Both have the same `remove_columns` method - IterableDataset is missing: <s>cast</s>, <s>cast_column</s>, <s>filter</s>, <s>rename_column</s>, <s>rename_columns</s>, class_encode_column, flatten, prepare_for_task, train_test_split, shard - Some other methods are missing but we can discuss them: set_transform, formatted_as, with_transform - And others don't really make sense for an iterable dataset: select, sort, add_column, add_item - Dataset is missing skip and take, that IterableDataset implements. ## Questions I think it would be nice to be able to switch between streaming and regular dataset easily, without changing the processing code significantly. 1. What should be aligned and what shouldn't between those two APIs ? IMO the minimum is to align the main processing methods. It would mean aligning breaking the current `Iterable.map` to have the same behavior as `Dataset.map` (add columns with dict.update), and add multiprocessing as well as the missing parameters. DONE ✅ It would also mean implementing the missing methods: cast, cast_column, filter, rename_column, rename_columns, class_encode_column, flatten, prepare_for_task, train_test_split, shard. WIP 🟠 2. What are the breaking changes for IterableDataset ? The main breaking change would be the change of behavior of `IterableDataset.map`, because currently it discards all the previous columns instead of keeping them. DONE ✅ 3. Shall we also do some changes for regular datasets ? I agree the simplest would be to have the exact same methods for both Dataset and IterableDataset. However this is probably not a good idea because it would prevent users from using the best benefits of them. 
That's why we can keep some aspects of regular datasets as they are: - keep the eager Dataset.map with caching - keep the with_transform method for lazy processing - keep Dataset.select (it could also be added to IterableDataset even though it's not recommended) We could have a completely aligned `map` method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that. For information, TFDS does lazy map by default, and has an additional `.cache()` method. ## Opinions ? I'd love to gather some opinions about this here. If the two APIs are more aligned it would be awesome for the examples in `transformers`, and it would create a satisfactory experience for users that want to switch from one mode to the other. cc @mariosasko @albertvillanova @thomwolf @patrickvonplaten @sgugger
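A minimal sketch of the end goal described above: the same user code running on a map-style `Dataset` and on a streaming `IterableDataset`; the text file path is hypothetical.
```python
from datasets import load_dataset

def add_length(example):
    # Works in both modes once map() behaviors are aligned (existing columns are kept/updated).
    example["n_chars"] = len(example["text"])
    return example

# Regular dataset: eager map with caching.
ds = load_dataset("text", data_files={"train": "corpus.txt"}, split="train")
ds = ds.map(add_length)

# Streaming dataset: lazy map, identical user code.
ids = load_dataset("text", data_files={"train": "corpus.txt"}, split="train", streaming=True)
ids = ids.map(add_length)
```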
https://github.com/huggingface/datasets/issues/3444
[ "Yes I agree, these should be as aligned as possible. Maybe we can also check the feedback in the survey at http://hf.co/oss-survey and see if people mentioned related things on the API (in particular if we go the breaking change way, it would be good to be sure we are taking the right direction for the community)....
null
3,444
false
Extend iter_archive to support file object input
This PR adds support to passing a file object to `[Streaming]DownloadManager.iter_archive`. With this feature, we can iterate over a tar file inside another tar file.
https://github.com/huggingface/datasets/pull/3443
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3443", "html_url": "https://github.com/huggingface/datasets/pull/3443", "diff_url": "https://github.com/huggingface/datasets/pull/3443.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3443.patch", "merged_at": "2021-12-17T17:53:02" }
3,443
true
Extend text to support yielding lines, paragraphs or documents
Add `config.row` option to `text` module to allow yielding lines (default, current case), paragraphs or documents. Feel free to comment on the name of the config parameter `row`: - Currently, the docs state datasets are made of rows and columns - Other names I considered: `example`, `item`
https://github.com/huggingface/datasets/pull/3442
[ "The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)", "> The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)\r\n\r\n@lhoestq @mariosasko I would avoid the term `split` in this context and...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3442", "html_url": "https://github.com/huggingface/datasets/pull/3442", "diff_url": "https://github.com/huggingface/datasets/pull/3442.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3442.patch", "merged_at": "2021-12-20T16:39:18" }
3,442
true
Add QuALITY dataset
## Adding a Dataset - **Name:** QuALITY - **Description:** A challenging question answering dataset with very long contexts (Twitter [thread](https://twitter.com/sleepinyourhat/status/1471225421794529281?s=20)) - **Paper:** No ArXiv link yet, but a draft is [here](https://github.com/nyu-mll/quality/blob/main/quality_preprint.pdf) - **Data:** GitHub repo [here](https://github.com/nyu-mll/quality) - **Motivation:** This dataset would serve as a nice way to benchmark long-range Transformer models like BigBird, Longformer and their descendants. In particular, it would be very interesting to see how the S4 model fares on this, given its impressive performance on the Long Range Arena. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://github.com/huggingface/datasets/issues/3441
[ "I'll take this one if no one hasn't yet!" ]
null
3,441
false
datasets keeps reading from cached files, although I disabled it
## Describe the bug Hi, I am trying to avoid dataset library using cached files, I get the following bug when this tried to read the cached files. I tried to do the followings: ``` from datasets import set_caching_enabled set_caching_enabled(False) ``` also force redownlaod: ``` download_mode='force_redownload' ``` but none worked so far, this is on a cluster and on some of the machines this reads from the cached files, I really appreciate any idea on how to fully remove caching @lhoestq many thanks ``` File "run_clm.py", line 496, in <module> main() File "run_clm.py", line 419, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 943, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/transformers/trainer.py", line 1445, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 172, in evaluate output = self.eval_loop( File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 241, in eval_loop metrics = self.compute_pet_metrics(eval_datasets, model, self.extra_info[metric_key_prefix], task=task) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 268, in compute_pet_metrics centroids = self._compute_per_token_train_centroids(model, task=task) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 353, in _compute_per_token_train_centroids data = get_label_samples(self.get_per_task_train_dataset(task), label) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 350, in get_label_samples return dataset.filter(lambda example: int(example['labels']) == label) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper out = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2519, in filter indices = self.map( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2036, in map return self._map_single( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 503, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper out = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2248, in _map_single return Dataset.from_file(cache_file_name, info=info, split=self.split) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 654, in from_file return cls( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 593, in __init__ self.info.features = 
self.info.features.reorder_fields_as(inferred_features) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1092, in reorder_fields_as return Features(recursive_reorder(self, other)) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1081, in recursive_reorder raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position) ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'candidates_ids': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'labels': Value(dtype='int64', id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'extra_fields': {}, 'task': Value(dtype='string', id=None)} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: linux - Python version: 3.8.12 - PyArrow version: 6.0.1
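A minimal sketch of the caching knobs involved here, assuming `datasets` 1.x APIs; whether they fully avoid cache reads on every cluster machine depends on the installed version, and the file names are hypothetical.
```python
from datasets import load_dataset, set_caching_enabled

set_caching_enabled(False)  # globally disable reuse of cached map/filter results

raw = load_dataset(
    "json",
    data_files={"train": "train.json"},   # hypothetical file
    download_mode="force_redownload",     # re-prepare instead of reusing the prepared cache
)

# Per-call override for a single transform:
filtered = raw["train"].filter(lambda ex: int(ex["labels"]) == 1, load_from_cache_file=False)
```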
https://github.com/huggingface/datasets/issues/3440
[ "Hi ! What version of `datasets` are you using ? Can you also provide the logs you get before it raises the error ?" ]
null
3,440
false
Add `cast_column` to `IterableDataset`
Closes #3369. cc: @patrickvonplaten
https://github.com/huggingface/datasets/pull/3439
[ "Awesome thanks a lot @mariosasko " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3439", "html_url": "https://github.com/huggingface/datasets/pull/3439", "diff_url": "https://github.com/huggingface/datasets/pull/3439.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3439.patch", "merged_at": "2021-12-16T15:55:19" }
3,439
true
Update supported versions of Python in setup.py
Update the list of supported versions of Python in `setup.py` to keep the PyPI project description updated.
https://github.com/huggingface/datasets/pull/3438
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3438", "html_url": "https://github.com/huggingface/datasets/pull/3438", "diff_url": "https://github.com/huggingface/datasets/pull/3438.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3438.patch", "merged_at": "2021-12-20T14:22:12" }
3,438
true
Update BLEURT hyperlink
The description of BLEURT on the hf.co website has a strange use of URL hyperlinking. This PR attempts to fix this, although I am not 100% sure Markdown syntax is allowed on the frontend or not. ![Screen Shot 2021-12-15 at 17 31 27](https://user-images.githubusercontent.com/26859204/146226432-c83cbdaf-f57d-4999-b53c-85da718ff7fb.png)
https://github.com/huggingface/datasets/pull/3437
[ "seems like a very very low-prio improvement :)", "@albertvillanova thanks for the feedback! I removed the formatting altogether since I think this is a bit simpler tor read than non-rendered Markdown" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3437", "html_url": "https://github.com/huggingface/datasets/pull/3437", "diff_url": "https://github.com/huggingface/datasets/pull/3437.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3437.patch", "merged_at": "2021-12-17T13:28:25" }
3,437
true
Add the OneStopQA dataset
Adding OneStopQA, a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme.
https://github.com/huggingface/datasets/pull/3436
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3436", "html_url": "https://github.com/huggingface/datasets/pull/3436", "diff_url": "https://github.com/huggingface/datasets/pull/3436.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3436.patch", "merged_at": "2021-12-17T13:25:29" }
3,436
true
Improve Wikipedia Loading Script
* More structured approach to detecting redirects * Remove redundant template filter code (covered by strip_code) * Add language-specific lists of additional media namespace aliases for filtering * Add language-specific lists of category namespace aliases for new link text cleaning step * Remove magic words (parser directions like __TOC__ that occasionally occur in text) Fix #3400 With support from @albertvillanova CC @yjernite
https://github.com/huggingface/datasets/pull/3435
[ "I wanted to flag a change from since we discussed this: I initially wrote a function for using the Wikimedia APIs to collect namespace aliases, but decided that adding in more http requests to the script wasn't a great idea so instead used that code to build a static list that I just added directly to the code.\r\...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3435", "html_url": "https://github.com/huggingface/datasets/pull/3435", "diff_url": "https://github.com/huggingface/datasets/pull/3435.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3435.patch", "merged_at": "2022-03-04T08:16:00" }
3,435
true
Add The People's Speech
## Adding a Dataset - **Name:** The People's Speech - **Description:** a massive English-language dataset of audio transcriptions of full sentences. - **Paper:** https://openreview.net/pdf?id=R8CwidgJ0yT - **Data:** https://mlcommons.org/en/peoples-speech/ - **Motivation:** With over 30,000 hours of speech, this dataset is the largest and most diverse freely available English speech recognition corpus today. [The article](https://thegradient.pub/new-datasets-to-democratize-speech-recognition-technology-2/) which may be useful when working on the dataset. cc: @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://github.com/huggingface/datasets/issues/3434
[ "This dataset is now available on the Hub here: https://huggingface.co/datasets/MLCommons/peoples_speech" ]
null
3,434
false
Add Multilingual Spoken Words dataset
## Adding a Dataset - **Name:** Multilingual Spoken Words - **Description:** Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours). Read more: https://mlcommons.org/en/news/spoken-words-blog/ - **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf - **Data:** https://mlcommons.org/en/multilingual-spoken-words/ - **Motivation:** Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://github.com/huggingface/datasets/issues/3433
[]
null
3,433
false
Correctly indent builder config in dataset script docs
null
https://github.com/huggingface/datasets/pull/3432
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3432", "html_url": "https://github.com/huggingface/datasets/pull/3432", "diff_url": "https://github.com/huggingface/datasets/pull/3432.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3432.patch", "merged_at": "2021-12-14T17:35:17" }
3,432
true
Unable to resolve any data file after loading once
When I rerun my program, it raises this error: "Unable to resolve any data file that matches '['**train*']' at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'zip']". How could I deal with this problem? Thanks. My code is below. ![image](https://user-images.githubusercontent.com/84694183/146023446-d75fdec8-65c1-484f-80d8-6c20ff5e994b.png)
https://github.com/huggingface/datasets/issues/3431
[ "Hi ! `load_dataset` accepts as input either a local dataset directory or a dataset name from the Hugging Face Hub.\r\n\r\nSo here you are getting this error the second time because it tries to load the local `wiki_dpr` directory, instead of `wiki_dpr` from the Hub. It doesn't work since it's a **cache** directory,...
null
3,431
false
Make decoding of Audio and Image feature optional
Add the `decode` argument (`True` by default) to the `Audio` and the `Image` feature to make it possible to toggle on/off decoding of these features. Even though we've discussed that on Slack, I'm not removing the `_storage_dtype` argument of the Audio feature in this PR to avoid breaking the Audio feature tests.
https://github.com/huggingface/datasets/pull/3430
[ "Closing this PR for now due to https://github.com/huggingface/datasets/issues/3145#issuecomment-993664104.", "Okay, after some more thinking, I'm re-opening this PR for three reasons:\r\n* This feature will allow us to remove the `image_file_path`/`audio_file_path` columns in our vision/audio datasets. Currently...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3430", "html_url": "https://github.com/huggingface/datasets/pull/3430", "diff_url": "https://github.com/huggingface/datasets/pull/3430.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3430.patch", "merged_at": "2022-01-25T18:57:52" }
3,430
true
Make cast cacheable (again) on Windows
`cast` currently emits the following warning when called on Windows: ``` Parameter 'function'=<function Dataset.cast.<locals>.<lambda> at 0x000001C930571EA0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. ``` It seems like the issue stems from the `config.PYARROW_VERSION` object not being serializable on Windows (tested with `dumps(lambda: config.PYARROW_VERSION)`), so I'm fixing this by capturing `config.PYARROW_VERSION.major` before the lambda definition.
https://github.com/huggingface/datasets/pull/3429
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3429", "html_url": "https://github.com/huggingface/datasets/pull/3429", "diff_url": "https://github.com/huggingface/datasets/pull/3429.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3429.patch", "merged_at": "2021-12-14T14:39:50" }
3,429
true
Clean squad dummy data
Some unused files were remaining, this PR removes them. We just need to keep the dummy_data.zip file
https://github.com/huggingface/datasets/pull/3428
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3428", "html_url": "https://github.com/huggingface/datasets/pull/3428", "diff_url": "https://github.com/huggingface/datasets/pull/3428.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3428.patch", "merged_at": "2021-12-13T18:57:50" }
3,428
true
Add The Pile Enron Emails subset
Add: - Enron Emails subset of The Pile: "enron_emails" config Close bigscience-workshop/data_tooling#310. CC: @StellaAthena
https://github.com/huggingface/datasets/pull/3427
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3427", "html_url": "https://github.com/huggingface/datasets/pull/3427", "diff_url": "https://github.com/huggingface/datasets/pull/3427.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3427.patch", "merged_at": "2021-12-14T17:30:55" }
3,427
true
Update disaster_response_messages download urls (+ add validation split)
Fixes #3240, fixes #3416
https://github.com/huggingface/datasets/pull/3426
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3426", "html_url": "https://github.com/huggingface/datasets/pull/3426", "diff_url": "https://github.com/huggingface/datasets/pull/3426.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3426.patch", "merged_at": "2021-12-14T14:38:29" }
3,426
true
Getting configs names takes too long
## Steps to reproduce the bug ```python from datasets import get_dataset_config_names get_dataset_config_names("allenai/c4") ``` ## Expected results I would expect to get the answer quickly, at least in less than 10s ## Actual results It takes about 45s on my environment ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-5.11.0-1022-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
https://github.com/huggingface/datasets/issues/3425
[ "maybe related to https://github.com/huggingface/datasets/issues/2859\r\n", "It looks like it's currently calling `HfFileSystem.ls()` ~8 times at the root and for each subdirectory:\r\n- \"\"\r\n- \"en.noblocklist\"\r\n- \"en.noclean\"\r\n- \"en\"\r\n- \"multilingual\"\r\n- \"realnewslike\"\r\n\r\nCurrently `ls` ...
null
3,425
false
Add RedCaps dataset
Add the RedCaps dataset. I'm not adding the generated `dataset_infos.json` file for now due to its size (11 MB). TODOs: - [x] dummy data - [x] dataset card Close #3316
https://github.com/huggingface/datasets/pull/3424
[ "Cool ! If you want you can include `dataset_infos.json` but only for the main configurations. That's what we do for example for translation datasets when there are too many configs", "@lhoestq I've added an example that uses `map` to download the images." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3424", "html_url": "https://github.com/huggingface/datasets/pull/3424", "diff_url": "https://github.com/huggingface/datasets/pull/3424.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3424.patch", "merged_at": "2022-01-12T14:13:15" }
3,424
true
Data is duplicated when setting num_workers > 1 with streaming data
## Describe the bug The data is repeated num_works times when we load_dataset with streaming and set num_works > 1 when construct dataloader ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import pandas as pd import numpy as np import os from datasets import load_dataset from torch.utils.data import DataLoader from tqdm import tqdm import shutil NUM_OF_USER = 1000000 NUM_OF_ACTION = 50000 NUM_OF_SEQUENCE = 10000 NUM_OF_FILES = 32 NUM_OF_WORKERS = 16 if __name__ == "__main__": shutil.rmtree("./dataset") for i in range(NUM_OF_FILES): sequence_data = pd.DataFrame( { "imei": np.random.randint(1, NUM_OF_USER, size=NUM_OF_SEQUENCE), "sequence": np.random.randint(1, NUM_OF_ACTION, size=NUM_OF_SEQUENCE) } ) if not os.path.exists("./dataset"): os.makedirs("./dataset") sequence_data.to_csv(f"./dataset/sequence_data_{i}.csv", index=False) dataset = load_dataset("csv", data_files=[os.path.join("./dataset",file) for file in os.listdir("./dataset") if file.endswith(".csv")], split="train", streaming=True).with_format("torch") data_loader = DataLoader(dataset, batch_size=1024, num_workers=NUM_OF_WORKERS) result = pd.DataFrame() for i, batch in tqdm(enumerate(data_loader)): result = pd.concat([result, pd.DataFrame(batch)], axis=0) result.to_csv(f"num_work_{NUM_OF_WORKERS}.csv", index=False) ``` ## Expected results data do not duplicate ## Actual results data duplicate NUM_OF_WORKERS = 16 ![image](https://user-images.githubusercontent.com/16486492/145748707-9d2df25b-2f4f-4d7b-a83e-242be4fc8934.png) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:datasets==1.14.0 - Platform:transformers==4.11.3 - Python version:3.8 - PyArrow version:
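A minimal workaround sketch (not the eventual library fix): wrap the streaming dataset and shard it across DataLoader workers with `torch.utils.data.get_worker_info()`, so each worker yields a disjoint slice instead of the full stream.
```python
from torch.utils.data import IterableDataset, get_worker_info

class WorkerShardedStream(IterableDataset):
    """Yield every num_workers-th example, offset by the worker id."""

    def __init__(self, hf_iterable_dataset):
        self.dataset = hf_iterable_dataset

    def __iter__(self):
        info = get_worker_info()
        num_workers = info.num_workers if info is not None else 1
        worker_id = info.id if info is not None else 0
        for i, example in enumerate(self.dataset):
            if i % num_workers == worker_id:
                yield example

# data_loader = DataLoader(WorkerShardedStream(dataset), batch_size=1024, num_workers=NUM_OF_WORKERS)
```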
https://github.com/huggingface/datasets/issues/3423
[ "Hi ! Thanks for reporting :)\r\n\r\nWhen using a PyTorch's data loader with `num_workers>1` and an iterable dataset, each worker streams the exact same data by default, resulting in duplicate data when iterating using the data loader.\r\n\r\nWe can probably fix this in `datasets` by checking `torch.utils.data.get_...
null
3,423
false
Error about load_metric
## Describe the bug File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric metric = metric_cls( TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python metric = load_metric("glue", "sst2") ``` ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 6.0.1
https://github.com/huggingface/datasets/issues/3422
[ "Hi ! I wasn't able to reproduce your error.\r\n\r\nCan you try to clear your cache at `~/.cache/huggingface/modules` and try again ?" ]
null
3,422
false
Adding mMARCO dataset
Adding mMARCO (v1.1) to HF datasets.
https://github.com/huggingface/datasets/pull/3421
[ "Hi @albertvillanova we've made a major overhaul of the loading script including all configurations we're making available. Could you please review it again?", "@albertvillanova :ping_pong: ", "Thanks @lhbonifacio for adding this dataset.\r\nHi there, i got an error about mmarco:\r\nConnectionError: Couldn't re...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3421", "html_url": "https://github.com/huggingface/datasets/pull/3421", "diff_url": "https://github.com/huggingface/datasets/pull/3421.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3421.patch", "merged_at": null }
3,421
true
Add eli5_category dataset
This pull request adds a categorized long-form question answering dataset, `ELI5_Category`. It's a new variant of the [ELI5](https://huggingface.co/datasets/eli5) dataset that uses Reddit tags to alleviate the training/validation overlap present in the original ELI5 dataset. A [report](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/) (Section 2) describes this dataset.
https://github.com/huggingface/datasets/pull/3420
[ "> Thanks a lot for adding this dataset ! Good job with the dataset card and the dataset scripts - they're really good :)\r\n> \r\n> I just added minor changes\r\n\r\nThanks for fixing typos!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3420", "html_url": "https://github.com/huggingface/datasets/pull/3420", "diff_url": "https://github.com/huggingface/datasets/pull/3420.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3420.patch", "merged_at": "2021-12-14T17:53:02" }
3,420
true
`.to_json` is extremely slow after `.select`
## Describe the bug Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset original = load_dataset("squad", split="train") original.to_json("from_original.json") # Takes 0 seconds selected_subset1 = original.select([i for i in range(len(original))]) selected_subset1.to_json("from_select1.json") # Takes 212 seconds selected_subset2 = original.select([i for i in range(int(len(original) / 2))]) selected_subset2.to_json("from_select2.json") # Takes 90 seconds ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: master (https://github.com/huggingface/datasets/commit/6090f3cfb5c819f441dd4a4bb635e037c875b044) - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 6.0.0
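A minimal sketch of a possible mitigation, assuming `Dataset.flatten_indices()` is available in the installed version: materialize the selected rows into a contiguous table once, then export.
```python
from datasets import load_dataset

original = load_dataset("squad", split="train")
subset = original.select(range(len(original) // 2))
subset = subset.flatten_indices()  # rewrite the table in the selected order (one-time cost)
subset.to_json("from_select2_flattened.json")  # export no longer chases the indices mapping
```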
https://github.com/huggingface/datasets/issues/3419
[ "Hi ! It's slower indeed because a datasets on which `select`/`shard`/`train_test_split`/`shuffle` has been called has to do additional steps to retrieve the data of the dataset table in the right order.\r\n\r\nIndeed, if you call `dataset.select([0, 5, 10])`, the underlying table of the dataset is not altered to k...
null
3,419
false
Add Wikisource dataset
Add loading script for Wikisource dataset. Fix #3399. CC: @geohci, @yjernite
https://github.com/huggingface/datasets/pull/3418
[ "As we are removing the dataset scripts from GitHub and moving them to the Hugging Face Hub, I am going to transfer this script to the repo: https://huggingface.co/datasets/wikimedia/wikisource" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3418", "html_url": "https://github.com/huggingface/datasets/pull/3418", "diff_url": "https://github.com/huggingface/datasets/pull/3418.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3418.patch", "merged_at": null }
3,418
true
Fix type of bridge field in QED
Use `Value("string")` instead of `Value("bool")` for the feature type of the `"bridge"` field in the QED dataset. If the value is `False`, set to `None`. The following paragraph in the QED repo explains the purpose of this field: >Each annotation in referential_equalities is a pair of spans, the question_reference and the sentence_reference, corresponding to an entity mention in the question and the selected_sentence respectively. As described in the paper, sentence_references can be "bridged in", in which case they do not correspond with any actual span in the selected_sentence. Hence, sentence_reference spans contain an additional field, bridge, which is a prepositional phrase when a reference is bridged, and is False otherwise. Prepositional phrases serve to link bridged references to an anchoring phrase in the selected_sentence. In the case a sentence_reference is bridged, the start and end, as well as the span string, map to such an anchoring phrase in the selected_sentence. Fix #3346 cc @VictorSanh
https://github.com/huggingface/datasets/pull/3417
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3417", "html_url": "https://github.com/huggingface/datasets/pull/3417", "diff_url": "https://github.com/huggingface/datasets/pull/3417.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3417.patch", "merged_at": "2021-12-14T14:39:05" }
3,417
true
disaster_response_messages unavailable
## Dataset viewer issue for '*disaster_response_messages*' **Link:** https://huggingface.co/datasets/disaster_response_messages Dataset unavailable. Link dead: https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv Am I the one who added this dataset? No
https://github.com/huggingface/datasets/issues/3416
[ "Hi, thanks for reporting! This is a duplicate of https://github.com/huggingface/datasets/issues/3240. We are working on a fix.\r\n\r\n" ]
null
3,416
false
Non-deterministic tests: CI tests randomly fail
## Describe the bug Some CI tests fail randomly. 1. In https://github.com/huggingface/datasets/pull/3375/commits/c10275fe36085601cb7bdb9daee9a8f1fc734f48, there were 3 failing tests, only on Linux: ``` =========================== short test summary info ============================ FAILED tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol[https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh-zip] FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive - Fi... FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped = 3 failed, 3553 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 192.79s (0:03:12) = ``` 2. After re-running the CI (without any change in the code) in https://github.com/huggingface/datasets/pull/3375/commits/57bfe1f342cd3c59d2510b992d5f06a0761eb147, there was only 1 failing test (one on Linux and a different one on Windows): - On Linux: ``` =========================== short test summary info ============================ FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped = 1 failed, 3555 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 199.76s (0:03:19) = ``` - On Windows: ``` =========================== short test summary info =========================== FAILED tests/test_load.py::test_load_dataset_builder_for_community_dataset_without_script = 1 failed, 3551 passed, 2954 skipped, 2 xfailed, 1 xpassed, 121 warnings in 478.58s (0:07:58) = ``` The test `tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped` passes locally. 3. After re-running again the CI (without any change in the code) in https://github.com/huggingface/datasets/pull/3375/commits/39f32f2119cf91b86867216bb5c356c586503c6a, ALL the tests passed.
https://github.com/huggingface/datasets/issues/3415
[ "I think it might come from two different issues:\r\n1. Google Drive is an unreliable host, mainly because of quota limitations\r\n2. the staging environment can sometimes raise some errors\r\n\r\nFor Google Drive tests we could set up some retries with backup URLs if necessary I guess.\r\nFor staging on the other ...
null
3,415
false
Skip None encoding (line deleted by accident in #3195)
Return the line deleted by accident in #3195 while [resolving merge conflicts](https://github.com/huggingface/datasets/pull/3195/commits/8b0ed15be08559056b817836a07d47acda0c4510). Fix #3181 (finally :))
https://github.com/huggingface/datasets/pull/3414
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3414", "html_url": "https://github.com/huggingface/datasets/pull/3414", "diff_url": "https://github.com/huggingface/datasets/pull/3414.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3414.patch", "merged_at": "2021-12-10T11:00:02" }
3,414
true
Add WIDER FACE dataset
Adds the WIDER FACE face detection benchmark. TODOs: * [x] dataset card * [x] dummy data
https://github.com/huggingface/datasets/pull/3413
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3413", "html_url": "https://github.com/huggingface/datasets/pull/3413", "diff_url": "https://github.com/huggingface/datasets/pull/3413.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3413.patch", "merged_at": "2022-01-12T14:13:47" }
3,413
true
Fix flaky test again for s3 serialization
Following https://github.com/huggingface/datasets/pull/3388 that wasn't enough (see CI error [here](https://app.circleci.com/pipelines/github/huggingface/datasets/9080/workflows/b971fb27-ff20-4220-9416-c19acdfdf6f4/jobs/55985))
https://github.com/huggingface/datasets/pull/3412
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3412", "html_url": "https://github.com/huggingface/datasets/pull/3412", "diff_url": "https://github.com/huggingface/datasets/pull/3412.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3412.patch", "merged_at": "2021-12-09T18:00:52" }
3,412
true
[chinese wwm] load_datasets behavior not as expected when using run_mlm_wwm.py script
## Describe the bug Model I am using (Bert, XLNet ...): bert-base-chinese The problem arises when using: * [https://github.com/huggingface/transformers/blob/master/examples/research_projects/mlm_wwm/run_mlm_wwm.py] the official example script: `run_mlm_wwm.py` The task I am working on is: pretraining with whole word masking on my own dataset and ref.json file. I tried to follow the run_mlm_wwm.py procedure to do whole word masking for the pretraining task. My file is in .txt form, where one line represents one sample, with `9,264,784` Chinese lines in total. The ref.json file also contains 9,264,784 lines of whole word masking reference data for my Chinese corpus. But when I try to adapt the run_mlm_wwm.py script, it shows that somehow after `datasets["train"] = load_dataset(...`, `len(datasets["train"])` returns `9,265,365`; then, after `tokenized_datasets = datasets.map(...`, `len(tokenized_datasets["train"])` returns `9,265,279`. I'm really confused; I tried to trace the code by myself but still can't tell what happened after a week of trying. I want to know what happens in the `load_dataset()` function and in `datasets.map` here, and how I ended up with more lines of data than I put in. So I'm here to ask. ## To reproduce Sorry that I can't provide my data here since it does not belong to me, but I'm sure I removed the blank lines. ## Expected behavior I expect the code to run as it should, but the AssertionError in line 167 keeps being raised because the number of lines in the reference json and in datasets['train'] differ. Thanks for your patient reading! ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 3.0.0
https://github.com/huggingface/datasets/issues/3411
[ "@LysandreJik not so sure who to @\r\nCould you help?", "Hi @hyusterr, I believe it is @wlhgtc from https://github.com/huggingface/transformers/pull/9887" ]
null
3,411
false
Fix dependencies conflicts in Windows CI after conda update to 4.11
For some reason the CI wasn't using python 3.6 but python 3.7 after the update to conda 4.11
https://github.com/huggingface/datasets/pull/3410
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3410", "html_url": "https://github.com/huggingface/datasets/pull/3410", "diff_url": "https://github.com/huggingface/datasets/pull/3410.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3410.patch", "merged_at": "2021-12-09T17:36:19" }
3,410
true
Pass new_fingerprint in multiprocessing
Following https://github.com/huggingface/datasets/pull/3045 Currently one can pass `new_fingerprint` to `.map()` to use a custom fingerprint instead of the one computed by hashing the map transform. However it's ignored if `num_proc>1`. In this PR I fixed that by passing `new_fingerprint` to `._map_single()` when `num_proc>1`. More specifically, `new_fingerprint` with a suffix based on the process `rank` is passed, so that each process has a different `new_fingerprint` cc @TevenLeScao @vlievin
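A minimal usage sketch of the behavior this PR enables, with a hypothetical fingerprint string: the user-supplied `new_fingerprint` is now honored when `num_proc > 1` (each worker process receives a rank-suffixed variant).
```python
from datasets import load_dataset

def add_question_length(example):
    example["question_len"] = len(example["question"])
    return example

ds = load_dataset("squad", split="train")
# The custom fingerprint replaces the hash of the transform, even with multiprocessing.
ds = ds.map(add_question_length, num_proc=2, new_fingerprint="add_question_len_v1")
```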
https://github.com/huggingface/datasets/pull/3409
[ "@lhoestq Hi~, does this support that `datasets.map(func, batched=True, batch_size, num_proc>1, new_fingerprint=\"func_v1\")` even if `func` can't pickle. I also notice that you said \"Unfortunately you need picklable mapping functions to make multiprocessing work :confused: Also feel free to open an issue or send ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3409", "html_url": "https://github.com/huggingface/datasets/pull/3409", "diff_url": "https://github.com/huggingface/datasets/pull/3409.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3409.patch", "merged_at": "2021-12-09T17:38:43" }
3,409
true
Typo in Dataset viewer error message
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* When creating an empty dataset repo, the Dataset Preview provides a helpful message that no files were found. There is a tiny typo in that message: "ressource" should be "resource" ![Screen Shot 2021-12-09 at 15 31 31](https://user-images.githubusercontent.com/26859204/145415725-9cd728f0-c2c8-4b4e-a8e1-4f4d7841c94a.png) Am I the one who added this dataset ? N/A
https://github.com/huggingface/datasets/issues/3408
[ "Fixed, thanks\r\n<img width=\"661\" alt=\"Capture d’écran 2021-12-22 à 12 02 30\" src=\"https://user-images.githubusercontent.com/1676121/147082881-cf700e8d-0511-4431-b214-d6cf8137db10.png\">\r\n" ]
null
3,408
false
Use max number of data files to infer module
When inferring the module for datasets without script, set a maximum number of iterations over data files. This PR fixes the issue of taking too long when hundred of data files present. Please, feel free to agree on both numbers: ``` # Datasets without script DATA_FILES_MAX_NUMBER = 10 ARCHIVED_DATA_FILES_MAX_NUMBER = 5 ``` Fix #3404.
https://github.com/huggingface/datasets/pull/3407
[ "Cool thanks :) Feel free to merge if it's all good for you" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3407", "html_url": "https://github.com/huggingface/datasets/pull/3407", "diff_url": "https://github.com/huggingface/datasets/pull/3407.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3407.patch", "merged_at": "2021-12-14T17:08:41" }
3,407
true
Fix module inference for archive with a directory
Fix module inference for an archive file that contains files within a directory. Fix #3405.
https://github.com/huggingface/datasets/pull/3406
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3406", "html_url": "https://github.com/huggingface/datasets/pull/3406", "diff_url": "https://github.com/huggingface/datasets/pull/3406.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3406.patch", "merged_at": "2021-12-08T13:03:28" }
3,406
true
ZIP format inference does not work when files located in a dir inside the archive
## Describe the bug When a zipped file contains archived files within a directory, the function `infer_module_for_data_files_in_archives` does not work. It only works for files located in the root directory of the ZIP file. ## Steps to reproduce the bug ```python infer_module_for_data_files_in_archives(["path/to/zip/file.zip"], False) ```
https://github.com/huggingface/datasets/issues/3405
[]
null
3,405
false
Optimize ZIP format inference
**Is your feature request related to a problem? Please describe.** When hundreds of ZIP files are present in a dataset, format inference takes too long. See: https://github.com/bigscience-workshop/data_tooling/issues/232#issuecomment-986685497 **Describe the solution you'd like** Iterate over a maximum number of files. CC: @lhoestq
https://github.com/huggingface/datasets/issues/3404
[]
null
3,404
false
Cannot import name 'maybe_sync'
## Describe the bug Cannot seem to import datasets when running run_summarizer.py script on a VM set up on ovhcloud ## Steps to reproduce the bug ```python from datasets import load_dataset ``` ## Expected results No error ## Actual results Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/lib/python3.7/site-packages/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 48, in <module> from .arrow_writer import ArrowWriter, OptimizedTypedSequence File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py", line 27, in <module> from .features import ( File "/opt/conda/lib/python3.7/site-packages/datasets/features/__init__.py", line 2, in <module> from .audio import Audio File "/opt/conda/lib/python3.7/site-packages/datasets/features/audio.py", line 8, in <module> from ..utils.streaming_download_manager import xopen File "/opt/conda/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 16, in <module> from ..filesystems import COMPRESSION_FILESYSTEMS File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/__init__.py", line 13, in <module> from .s3filesystem import S3FileSystem # noqa: F401 File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/s3filesystem.py", line 1, in <module> import s3fs File "/opt/conda/lib/python3.7/site-packages/s3fs/__init__.py", line 1, in <module> from .core import S3FileSystem, S3File File "/opt/conda/lib/python3.7/site-packages/s3fs/core.py", line 11, in <module> from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync ImportError: cannot import name 'maybe_sync' from 'fsspec.asyn' (/opt/conda/lib/python3.7/site-packages/fsspec/asyn.py) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.0 - Platform: OVH Cloud Tesla V100 Machine - Python version: 3.7.9 - PyArrow version: 6.0.1
https://github.com/huggingface/datasets/issues/3403
[ "Hi ! Can you try updating `fsspec` ? The minimum version is `2021.05.0`", "hey @lhoestq. I'm using `fsspec-2021.11.1` but still getting that error.", "Maybe this discussion can help:\r\n\r\nhttps://github.com/fsspec/filesystem_spec/issues/597#issuecomment-958646964", "Thanks @lhoestq. Downgrading `fsspec and...
null
3,403
false
More robust first elem check in encode/cast example
Fix #3306
https://github.com/huggingface/datasets/pull/3402
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3402", "html_url": "https://github.com/huggingface/datasets/pull/3402", "diff_url": "https://github.com/huggingface/datasets/pull/3402.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3402.patch", "merged_at": "2021-12-08T13:02:15" }
3,402
true
Add Wikimedia pre-processed datasets
## Adding a Dataset - **Name:** Add pre-processed data to: - *wikimedia/wikipedia*: https://huggingface.co/datasets/wikimedia/wikipedia - *wikimedia/wikisource*: https://huggingface.co/datasets/wikimedia/wikisource - **Description:** Add pre-processed data to the Hub for all languages - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** This will be very useful for the NLP community, as the pre-processing has a high cost for lot of researchers (both in computation and in knowledge) Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). CC: @geohci, @yjernite
https://github.com/huggingface/datasets/issues/3401
[]
null
3,401
false
Improve Wikipedia loading script
As reported by @geohci, the "wikipedia" processing/loading script could be improved by some additional small suggested processing functions: - _extract_content(filepath): - Replace .startswith("#redirect") with a more structured approach: if elem.find(f"./{namespace}redirect") is None: continue - _parse_and_clean_wikicode(raw_content, parser): - Remove rm_template from cleaning -- this is redundant with .strip_code() from mwparserfromhell - Build a language-specific list of namespace prefixes to filter out, per get_namespace_prefixes below - Optional: strip prefixes like categories -- e.g., Category:Towns in Tianjin becomes Towns in Tianjin - Optional: strip magic words
https://github.com/huggingface/datasets/issues/3400
[ "Thanks! See https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikipedia%20Processing.ipynb for more implementation details / some data around the overhead induced by adding the extra preprocessing steps (stripping link prefixes and magic words)", "Closed by:\r\n- #3435" ]
null
3,400
false
Add Wikisource dataset
## Adding a Dataset - **Name:** *wikisource* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** Additional high quality textual data, besides Wikipedia. Add loading script as "canonical" dataset (as it is the case for ""wikipedia"). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). CC: @geohci, @yjernite
https://github.com/huggingface/datasets/issues/3399
[ "See notebook by @geohci: https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikisource%20Processing.ipynb" ]
null
3,399
false
Add URL field to Wikimedia dataset instances: wikipedia,...
As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2 This should be done for all pre-processed datasets under "wikimedia" org in the Hub: https://huggingface.co/wikimedia
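A minimal sketch of the URL construction discussed in the comments below (an assumption, not the final implementation): build the page URL from the language code and the page title, replacing spaces with underscores.
```python
import urllib.parse

def wikipedia_url(language: str, title: str) -> str:
    # e.g. ("en", "Towns in Tianjin") -> "https://en.wikipedia.org/wiki/Towns_in_Tianjin"
    return f"https://{language}.wikipedia.org/wiki/{urllib.parse.quote(title.replace(' ', '_'))}"

print(wikipedia_url("en", "Towns in Tianjin"))
```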
https://github.com/huggingface/datasets/issues/3398
[ "@geohci, I think the field \"url\" does not appear in the Wikimedia dumps. Therefore I guess we should generate it, using the \"title\" field and making some transformation of it (replacing spaces with underscores) and prepending the domain (created using the language)?", "Indeed:\r\n\r\n> To re-distribute text ...
null
3,398
false
add BNL newspapers
This pull request adds the BNL's [processed newspaper collections](https://data.bnl.lu/data/historical-newspapers/) as a dataset. This is partly done to support BigScience see: https://github.com/bigscience-workshop/data_tooling/issues/192. The Datacard is more sparse than I would like but I plan to make a separate pull request to try and make this more complete at a later date. I had to manually add the `dummy_data` but I believe I've done this correctly (the tests pass locally).
https://github.com/huggingface/datasets/pull/3397
[ "\r\n> Also, maybe calling the dataset as \"bnl_historical_newspapers\" and setting \"processed\" as one configuration name?\r\n\r\nThis sounds like a good idea but my only question around this is how easy it would be to use the same approach for processing the other newspaper collections [https://data.bnl.lu/data/...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3397", "html_url": "https://github.com/huggingface/datasets/pull/3397", "diff_url": "https://github.com/huggingface/datasets/pull/3397.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3397.patch", "merged_at": "2022-01-17T18:35:34" }
3,397
true
Install Audio dependencies to support audio decoding
## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*' **Link:** *https://huggingface.co/datasets/openslr* **Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla* Error: ``` Status code: 400 Exception: ImportError Message: To support decoding audio files, please install 'librosa'. ``` Am I the one who added this dataset ? Yes-No - openslr: No - projecte-aina/parlament_parla: Yes
https://github.com/huggingface/datasets/issues/3396
[ "https://huggingface.co/datasets/projecte-aina/parlament_parla -> works (but we still have to show an audio player)\r\n\r\nhttps://huggingface.co/datasets/openslr -> another issue: `Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/zip:/asr_javanese/data/00/00004fe6aa.flac'`", ...
null
3,396
false
Fix formatting in IterableDataset.map docs
Fix formatting in the recently added `Map` section of the streaming docs.
https://github.com/huggingface/datasets/pull/3395
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3395", "html_url": "https://github.com/huggingface/datasets/pull/3395", "diff_url": "https://github.com/huggingface/datasets/pull/3395.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3395.patch", "merged_at": "2021-12-08T10:11:32" }
3,395
true
Preserve all feature types when saving a dataset on the Hub with `push_to_hub`
Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading it with `load_dataset` will return the feature as type `Value`. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and push not only the parquet files in `push_to_hub`, but also the dataset `info` (stored in a JSON file).
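A minimal sketch of the reported behavior (the repo id is a placeholder, and this assumes write access to the Hub):

```python
from datasets import ClassLabel, Dataset, Features, Value, load_dataset

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
ds = Dataset.from_dict({"text": ["great", "awful"], "label": [1, 0]}, features=features)
print(ds.features["label"])  # ClassLabel(..., names=['neg', 'pos'])

ds.push_to_hub("my-user/classlabel-demo")  # placeholder repo id
reloaded = load_dataset("my-user/classlabel-demo", split="train")
print(reloaded.features["label"])  # reported to come back as a plain Value instead of ClassLabel
```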
https://github.com/huggingface/datasets/issues/3394
[ "According to this [comment in the forum](https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075/8?u=lhoestq), using `push_to_hub` on a dataset with `ClassLabel` can also make the feature simply disappear when it's reloaded !", "Maybe we can also fix https://github.com/huggingface/datasets/iss...
null
3,394
false
Common Voice Belarusian Dataset
## Adding a Dataset - **Name:** *Common Voice Belarusian Dataset* - **Description:** *[commonvoice.mozilla.org/be](https://commonvoice.mozilla.org/be)* - **Data:** *[commonvoice.mozilla.org/be/datasets](https://commonvoice.mozilla.org/be/datasets)* - **Motivation:** *It has more than 7GB of data, so it will be great to have it in this package so anyone can try to train something for Belarusian language.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://github.com/huggingface/datasets/issues/3393
[]
null
3,393
false
Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
## Dataset viewer issue for `dansbecker/hackernews_hiring_posts` **Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts *short description of the issue* Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603 Am I the one who added this dataset ? No -> @dansbecker
https://github.com/huggingface/datasets/issues/3392
[ "This issue was fixed by me calling `all_datasets.push_to_hub(\"hackernews_hiring_posts\")`.\r\n\r\nThe previous problems were from calling `all_datasets.save_to_disk` and then pushing with `my_repo.git_add` and `my_repo.push_to_hub`.\r\n" ]
null
3,392
false
method to select columns
**Is your feature request related to a problem? Please describe.** * There is currently no way to select some columns of a dataset. In pandas, one can use `df[['col1', 'col2']]` to select columns, but in `datasets`, it results in an error. **Describe the solution you'd like** * A new method that can be used to create a new dataset with only a list of specified columns. **Describe alternatives you've considered** `.remove_columns(self, columns: Union[str, List[str]], inverse: bool = False)` Or `.select(self, indices: Iterable = None, columns: List[str] = None)`
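Until such a method exists, one workaround is to invert `remove_columns`; a small sketch (the dataset and column names are just examples):

```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")  # columns: sentence1, sentence2, label, idx

keep = ["sentence1", "label"]
ds_subset = ds.remove_columns([col for col in ds.column_names if col not in keep])
print(ds_subset.column_names)  # ['sentence1', 'label']
```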
https://github.com/huggingface/datasets/issues/3391
[ "duplicate of #2655" ]
null
3,391
false
Loading dataset throws "KeyError: 'Field "builder_name" does not exist in table schema'"
## Describe the bug I have prepared a dataset with `datasets` and now I am trying to load it back (Finnish-NLP/voxpopuli_fi). I get "KeyError: 'Field "builder_name" does not exist in table schema'". My dataset folder and files should look like the ones @patrickvonplaten has here: https://huggingface.co/datasets/flax-community/german-common-voice-processed How my voxpopuli dataset looks: ![image](https://user-images.githubusercontent.com/25264037/144895598-b7d9ae91-b04a-4046-9f06-b71ff0824d13.png) Part of the processing (the path column is the absolute path to the audio files): ``` def add_audio_column(example): example['audio'] = example['path'] return example voxpopuli = voxpopuli.map(add_audio_column) voxpopuli.cast_column("audio", Audio()) voxpopuli["audio"] <-- to my knowledge this does load the local files and prepares those arrays voxpopuli = voxpopuli.cast_column("audio", Audio(sampling_rate=16_000)) resampling to 16 kHz ``` I have then saved it to disk: `voxpopuli.save_to_disk('/asr_disk/datasets_processed_new/voxpopuli')` and made the folder structure the same as @patrickvonplaten's. I also get the same error while trying to load_dataset from his repo: ![image](https://user-images.githubusercontent.com/25264037/144895872-e9b8f326-cf2b-46cf-9417-606a0ce14077.png) ## Steps to reproduce the bug ```python dataset = load_dataset("Finnish-NLP/voxpopuli_fi") ``` ## Expected results The dataset is loaded correctly and looks like in the first picture. ## Actual results Loading throws a KeyError: KeyError: 'Field "builder_name" does not exist in table schema' Resources I have been trying to follow: https://huggingface.co/docs/datasets/audio_process.html https://huggingface.co/docs/datasets/share_dataset.html ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.2.dev0 - Platform: Ubuntu 20.04.2 LTS - Python version: 3.8.12 - PyArrow version: 6.0.1
https://github.com/huggingface/datasets/issues/3390
[ "Got solved it with push_to_hub, closing" ]
null
3,390
false
Add EDGAR
## Adding a Dataset - **Name:** EDGAR Database - **Description:** https://www.sec.gov/edgar/about EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system, is the primary system for companies and others submitting documents under the Securities Act of 1933, the Securities Exchange Act of 1934, the Trust Indenture Act of 1939, and the Investment Company Act of 1940. Containing millions of company and individual filings, EDGAR benefits investors, corporations, and the U.S. economy overall by increasing the efficiency, transparency, and fairness of the securities markets. The system processes about 3,000 filings per day, serves up 3,000 terabytes of data to the public annually, and accommodates 40,000 new filers per year on average. EDGAR® and EDGARLink® are registered trademarks of the SEC. - **Data:** https://www.sec.gov/os/accessing-edgar-data - **Motivation:** Enabling and improving FSI (Financial Services Industry) datasets to increase ease of use Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://github.com/huggingface/datasets/issues/3389
[ "cc @juliensimon ", "Datasets are not tracked in this repository anymore. But you can make your own dataset in the huggingface hub" ]
null
3,389
false
Fix flaky test of the temporary directory used by load_from_disk
The test is flaky; here is an example of a random CI failure: https://github.com/huggingface/datasets/commit/73ed6615b4b3eb74d5311684f7b9e05cdb76c989 I fixed it by not checking the content of the random part of the temporary directory name.
https://github.com/huggingface/datasets/pull/3388
[ "CI failed because of a server error - merging" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3388", "html_url": "https://github.com/huggingface/datasets/pull/3388", "diff_url": "https://github.com/huggingface/datasets/pull/3388.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3388.patch", "merged_at": "2021-12-06T11:24:49" }
3,388
true
Create Language Modeling task
Create Language Modeling task to be able to specify the input "text" column in a dataset. This can be useful for datasets which are not exclusively used for language modeling and have more than one column: - for text classification datasets (with columns "review" and "rating", for example), the Language Modeling task can be used to specify the "text" column ("review" in this case). TODO: - [ ] Add the LanguageModeling task to all dataset scripts which can be used for language modeling
https://github.com/huggingface/datasets/pull/3387
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3387", "html_url": "https://github.com/huggingface/datasets/pull/3387", "diff_url": "https://github.com/huggingface/datasets/pull/3387.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3387.patch", "merged_at": "2021-12-17T17:18:27" }
3,387
true
Fix typos in dataset cards
This PR: - Fix typos in dataset cards - Fix Papers With Code ID for: - Bilingual Corpus of Arabic-English Parallel Tweets - Tweets Hate Speech Detection - Add pretty name tags
https://github.com/huggingface/datasets/pull/3386
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3386", "html_url": "https://github.com/huggingface/datasets/pull/3386", "diff_url": "https://github.com/huggingface/datasets/pull/3386.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3386.patch", "merged_at": "2021-12-06T09:30:54" }
3,386
true
Non-batched `with_transform`, `set_transform`
**Is your feature request related to a problem? Please describe.** A `torch.utils.data.Dataset.__getitem__` operates on a single example. But 🤗 `Datasets.with_transform` doesn't seem to allow a non-batched transform. **Describe the solution you'd like** Have a `batched=True` argument in `Datasets.with_transform` **Describe alternatives you've considered** * Convert a non-batched transform function to a batched one myself. * Wrap a 🤗 Dataset with a torch Dataset, and add a `__getitem__`. 🙄 * Have `lazy=False` in `Dataset.map`, and return a `LazyDataset` if `lazy=True`. This way the same `map` interface can be used, and existing code can be updated with one argument change.
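As a sketch of the first alternative (wrapping a per-example transform so it can be passed to the batched `with_transform`/`set_transform`), assuming the per-example function takes and returns a dict of columns (the helper name is made up):

```python
def batchify(per_example_fn):
    """Turn a per-example transform (dict -> dict) into a batched one (dict of lists -> dict of lists)."""
    def batched_fn(batch):
        columns = list(batch.keys())
        num_examples = len(batch[columns[0]])
        # Split the batch into individual examples, apply the per-example transform...
        examples = [{col: batch[col][i] for col in columns} for i in range(num_examples)]
        outputs = [per_example_fn(example) for example in examples]
        # ...then re-collate the per-example dicts into a dict of lists
        return {key: [out[key] for out in outputs] for key in outputs[0]}
    return batched_fn

# e.g. dataset.set_transform(batchify(my_per_example_transform))
```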
https://github.com/huggingface/datasets/issues/3385
[ "Hi ! Thanks for the suggestion :)\r\nIt makes sense to me, and it can surely be implemented by wrapping the user's function to make it a batched function. However I'm not a big fan of the inconsistency it would create with `map`: `with_transform` is batched by default while `map` isn't.\r\n\r\nIs there something y...
null
3,385
false
Adding mMARCO dataset
We are adding the mMARCO dataset to the HuggingFace datasets repo. This way, all the languages covered in the translation are available in an easy way.
https://github.com/huggingface/datasets/pull/3384
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3384", "html_url": "https://github.com/huggingface/datasets/pull/3384", "diff_url": "https://github.com/huggingface/datasets/pull/3384.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3384.patch", "merged_at": null }
3,384
true
add Georgian data in cc100.
Update the cc100 dataset to support loading Georgian (ka) data, which is available in the original CC100 source. All tests pass. Dummy data generated. Metadata generated.
https://github.com/huggingface/datasets/pull/3383
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3383", "html_url": "https://github.com/huggingface/datasets/pull/3383", "diff_url": "https://github.com/huggingface/datasets/pull/3383.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3383.patch", "merged_at": "2021-12-14T14:37:22" }
3,383
true
#3337 Add typing overloads to Dataset.__getitem__ for mypy
Add typing overloads to Dataset.__getitem__ for mypy Fixes #3337 **Iterable** `Iterable` from `collections` cannot be parameterized, so you can't do `Iterable[int]`, for example. `typing` has a generic version that builds upon the one from `collections`. **Flake8** I had to add `# noqa: F811`; this is a bug in Flake8. datasets uses flake8==3.7.9, which was released in October 2019. If I update flake8 (to 4.0.1), I no longer get these errors, but I did not want to make the update without your approval. (It also triggers other errors, like no args in f-strings.)
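For readers unfamiliar with the pattern, a simplified sketch of such overloads (not the exact code in this PR) looks roughly like this:

```python
from typing import Dict, List, Union, overload


class Dataset:
    @overload
    def __getitem__(self, key: Union[int, slice, List[int]]) -> Dict:  # noqa: F811
        ...

    @overload
    def __getitem__(self, key: str) -> List:  # noqa: F811
        ...

    def __getitem__(self, key):  # noqa: F811
        # the real implementation dispatches on the type of `key`
        ...
```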
https://github.com/huggingface/datasets/pull/3382
[ "Locally the `make quality` passes with the same dependencies. I would suggest upgrading flake8. (I can take care of it in another PR)\r\ncc @lhoestq ", "Thank you for fixing flake8! I think we are ready to merge then. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3382", "html_url": "https://github.com/huggingface/datasets/pull/3382", "diff_url": "https://github.com/huggingface/datasets/pull/3382.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3382.patch", "merged_at": "2021-12-14T10:28:54" }
3,382
true
Unable to load audio_features from common_voice dataset
## Describe the bug I am not able to load audio features from common_voice dataset ## Steps to reproduce the bug ``` from datasets import load_dataset import torchaudio test_dataset = load_dataset("common_voice", "hi", split="test[:2%]") resampler = torchaudio.transforms.Resample(48_000, 16_000) def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) ``` ## Expected results This piece of code should return test_dataset after loading audio features. ## Actual results Reusing dataset common_voice (/home/jovyan/.cache/huggingface/datasets/common_voice/hi/6.1.0/b879a355caa529b11f2249400b61cadd0d9433f334d5c60f8c7216ccedfecfe1) /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:341: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`. "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 " Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. 0%| | 0/3 [00:00<?, ?ex/s]formats: can't open input file `common_voice_hi_23795358.mp3': No such file or directory 0%| | 0/3 [00:00<?, ?ex/s] Traceback (most recent call last): File "demo_file.py", line 23, in <module> test_dataset = test_dataset.map(speech_file_to_array_fn) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2036, in map desc=desc, File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 485, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 411, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2368, in _map_single example = apply_function_on_filtered_inputs(example, i, offset=offset) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2277, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1978, in decorated result = f(decorated_item, *args, **kwargs) File "demo_file.py", line 19, in speech_file_to_array_fn speech_array, sampling_rate = torchaudio.load(batch["path"]) File "/opt/conda/lib/python3.7/site-packages/torchaudio/backend/sox_io_backend.py", line 154, in load filepath, frame_offset, num_frames, normalize, channels_first, format) RuntimeError: Error loading audio file: failed to open file common_voice_hi_23795358.mp3 ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-4.14.243 with-debian-bullseye-sid - Python version: 3.7.9 - PyArrow version: 6.0.1
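Based on the reply below, with `datasets` 1.16 the decoded audio should be read from the `audio` field rather than loading `batch["path"]` with torchaudio; a sketch of the adjusted snippet:

```python
from datasets import load_dataset, Audio

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
# Resample on the fly via the Audio feature instead of torchaudio.transforms.Resample
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))

def speech_file_to_array_fn(batch):
    batch["speech"] = batch["audio"]["array"]
    batch["sampling_rate"] = batch["audio"]["sampling_rate"]
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```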
https://github.com/huggingface/datasets/issues/3381
[ "Hi ! Feel free to access `batch[\"audio\"][\"array\"]` and `batch[\"audio\"][\"sampling_rate\"]` instead\r\n\r\n`datasets` 1.16 introduced some changes in `common_voice` and now the `path` field is no longer a path to a local file (but rather the path to the file in the archive it's extracted from)", "Thanks for...
null
3,381
false
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
Thanks to all of you, `datasets` will pass 11.5k stars :star2: this week! If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts: [**hf.co/oss-survey**](https://hf.co/oss-survey) (please reply in the above feedback form rather than to this thread) Thank you all on behalf of the HuggingFace team! 🤗
https://github.com/huggingface/datasets/issues/3380
[]
null
3,380
false
iter_archive on zipfiles with better compression type check
Hello @lhoestq, thank you for your detailed answer on the previous PR! I made this new PR because I misused git on the previous one (#3347). Related issue: #3272. # Comments: * For the extension check, I used the `_get_extraction_protocol` function in **download_manager.py** with a slight change and called it `_get_extraction_protocol_local`: **I removed this part:** ```python elif path.endswith(".tar.gz") or path.endswith(".tgz"): raise NotImplementedError( f"Extraction protocol for TAR archives like '{urlpath}' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead." ) ``` **And also changed:** ```diff - extension = path.split(".")[-1] + extension = "tar" if path.endswith(".tar.gz") else path.split(".")[-1] ``` The reason for this is that a compression like **.tar.gz** would be considered a **.gz**, which is handled with **zipfile**, though **.tar.gz** can only be opened using **tarfile**. Please tell me if there's anything to change. # Tasks: - [x] download_manager.py - [x] streaming_download_manager.py
https://github.com/huggingface/datasets/pull/3379
[ "Hello @lhoestq, thank you for your answer.\r\n\r\nI don't use pytest a lot so I think I might need some help on it :) but I tried some tests for `streaming_download_manager.py` only. I don't know how to test `download_manager.py` since we need to use local files.\r\n\r\n# Comments : \r\n* In **download_manager.py*...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3379", "html_url": "https://github.com/huggingface/datasets/pull/3379", "diff_url": "https://github.com/huggingface/datasets/pull/3379.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3379.patch", "merged_at": "2023-01-24T12:53:08" }
3,379
true
Add The Pile subsets
Add The Pile subsets: - pubmed - ubuntu_irc - europarl - hacker_news - nih_exporter Close bigscience-workshop/data_tooling#301. CC: @StellaAthena
https://github.com/huggingface/datasets/pull/3378
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3378", "html_url": "https://github.com/huggingface/datasets/pull/3378", "diff_url": "https://github.com/huggingface/datasets/pull/3378.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3378.patch", "merged_at": "2021-12-09T18:11:23" }
3,378
true
COCO 🥥 on the 🤗 Hub?
This is a draft PR since I ran into a few small problems. I referred to this TFDS code: https://github.com/tensorflow/datasets/blob/2538a08c184d53b37bfcf52cc21dd382572a88f4/tensorflow_datasets/object_detection/coco.py cc: @mariosasko
https://github.com/huggingface/datasets/pull/3377
[ "@mariosasko I fixed couple of bugs", "TO-DO: \r\n- [x] Add unlabeled 2017 splits, train and validation splits of 2015\r\n- [x] Add Class Labels as list instead", "@mariosasko added fine & coarse grained labels, will fix the bugs (currently getting set up with VM, my internet is too slow to run the tests and do...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3377", "html_url": "https://github.com/huggingface/datasets/pull/3377", "diff_url": "https://github.com/huggingface/datasets/pull/3377.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3377.patch", "merged_at": null }
3,377
true
Update clue benchmark
Fix #3374
https://github.com/huggingface/datasets/pull/3376
[ "The CI error is due to missing tags in the CLUE dataset card - merging !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3376", "html_url": "https://github.com/huggingface/datasets/pull/3376", "diff_url": "https://github.com/huggingface/datasets/pull/3376.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3376.patch", "merged_at": "2021-12-08T14:14:41" }
3,376
true
Support streaming zipped dataset repo by passing only repo name
Proposed solution: - I have added the method `iter_files` to DownloadManager and StreamingDownloadManager - I use this in modules: "csv", "json", "text" - I test for CSV/JSONL/TXT zipped (and non-zipped) files, both in streaming and non-streaming modes Fix #3373.
https://github.com/huggingface/datasets/pull/3375
[ "I just tested and I think this only opens one file ? If there are several files in the ZIP, only the first one is opened. To open several files from a ZIP, one has to call `open` several times.\r\n\r\nWhat about updating the CSV loader to make it `download_and_extract` zip files, and open each extracted file ?", ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3375", "html_url": "https://github.com/huggingface/datasets/pull/3375", "diff_url": "https://github.com/huggingface/datasets/pull/3375.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3375.patch", "merged_at": "2021-12-16T18:03:31" }
3,375
true
NonMatchingChecksumError for the CLUE:cluewsc2020, chid, c3 and tnews
Hi, it seems like there are updates in cluewsc2020, chid, c3 and tnews, since I could not load them due to a checksum error.
https://github.com/huggingface/datasets/issues/3374
[ "Seems like the issue still exists,:\r\n`Downloading and preparing dataset clue/chid (download: 127.15 MiB, generated: 259.71 MiB, post-processed: Unknown size, total: 386.86 MiB) to /mnt/cache/tanhaochen/.cache/huggingface/datasets/clue/chid/1.0.0/e55b490cb7809dcd8db31b9a87119f2e2ec87cdc060da8a9ac070b070ca3e379......
null
3,374
false
Support streaming zipped CSV dataset repo by passing only repo name
Given a community 🤗 dataset repository containing only a zipped CSV file (only raw data, no loading script), I would like to load it in streaming mode without passing `data_files`: ``` ds_name = "bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab" ds = load_dataset(ds_name, split="train", streaming=True, use_auth_token=True) item = next(iter(ds)) ``` Currently, it gives a `FileNotFoundError` because there is no glob (no "\*" after "zip://": "zip://*") in the passed URL: ``` 'zip://::https://huggingface.co/datasets/bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab/resolve/e5d45f1bd9a8a798cc14f0a45ebc1ce91907c792/poems_dataset.zip' ```
https://github.com/huggingface/datasets/issues/3373
[]
null
3,373
false
[SEO improvement] Add Dataset Metadata to make datasets indexable
Some people who host datasets on github seem to include a table of metadata at the end of their README.md to make the dataset indexable by [Google Dataset Search](https://datasetsearch.research.google.com/) (See [here](https://github.com/google-research/google-research/tree/master/goemotions#dataset-metadata) and [here](https://github.com/cvdfoundation/google-landmark#dataset-metadata)). This could be a useful addition to canonical datasets; perhaps even community datasets. I'll include a screenshot (as opposed to markdown) as an example so as not to have a github issue indexed as a dataset: > ![image](https://user-images.githubusercontent.com/3664563/144496173-953428cf-633a-4571-b75b-f099c6b2ed65.png) **_PS: It might very well be the case that this is already covered by some other markdown magic I'm not aware of._**
https://github.com/huggingface/datasets/issues/3372
[]
null
3,372
false
New: Americas NLI dataset
This PR adds the [Americas NLI](https://arxiv.org/abs/2104.08726) dataset, an extension of XNLI to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. One odd thing (not sure) is that I had to set `n_lines` very large (`datasets-cli dummy_data ./datasets/americas_nli/ --auto_generate --n_lines 7500`) to successfully generate the dummy files for all the subsets. Happy to get some guidance here. Otherwise, I hope everything is in order :) e: missed a step, onto fixing the tests e2: there you go -- hope it's ok to have added more languages with their ISO codes to `languages.json`, need those tests to pass :laughing:
https://github.com/huggingface/datasets/pull/3371
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3371", "html_url": "https://github.com/huggingface/datasets/pull/3371", "diff_url": "https://github.com/huggingface/datasets/pull/3371.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3371.patch", "merged_at": "2021-12-08T13:58:11" }
3,371
true
Document a training loop for streaming dataset
I added some docs about streaming datasets. In particular, I added two subsections: - one on how to use `map` for preprocessing - one on how to use a streaming dataset in a PyTorch training loop cc @patrickvonplaten @stevhliu if you have some comments cc @Rocketknight1 later we can add the one for TF and I might need your help ^^'
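For context, the documented pattern is roughly the following sketch (the dataset name is only an example, and `tokenizer` and `model` are assumed to be defined elsewhere):

```python
from torch.utils.data import DataLoader
from datasets import load_dataset

# Stream the dataset instead of downloading it entirely
dataset = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
# Preprocess on the fly with map (tokenizer assumed to be defined)
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)
dataset = dataset.with_format("torch")

dataloader = DataLoader(dataset, batch_size=8)
for batch in dataloader:
    outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
    # ... backward pass, optimizer step, etc.
```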
https://github.com/huggingface/datasets/pull/3370
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3370", "html_url": "https://github.com/huggingface/datasets/pull/3370", "diff_url": "https://github.com/huggingface/datasets/pull/3370.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3370.patch", "merged_at": "2021-12-03T13:34:34" }
3,370
true
[Audio] Allow resampling for audio datasets in streaming mode
Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows: ```python from datasets import load_dataset, Audio ds = load_dataset("common_voice", "ab", split="test") ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) ``` However, it currently fails in streaming mode: ```python from datasets import load_dataset, Audio ds = load_dataset("common_voice", "ab", split="test", streaming=True) ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) ``` with the following error: ``` AttributeError: 'IterableDataset' object has no attribute 'cast_column' ``` It would be great if we could add such a feature (I'm not 100% sure though how complex this would be).
https://github.com/huggingface/datasets/issues/3369
[ "This requires implementing `cast_column` for iterable datasets, it could be a very nice addition !\r\n\r\n<s>It can also be useful to be able to disable the audio/image decoding for the dataset viewer (see PR https://github.com/huggingface/datasets/pull/3430) cc @severo </s>\r\nEDIT: actually following https://git...
null
3,369
false
Fix dict source_datasets tagset validator
Currently, the `source_datasets` tag validation does not support passing a dict with configuration keys. This PR: - Extends `tagset_validator` to support regex tags - Uses `tagset_validator` to validate dict `source_datasets`
https://github.com/huggingface/datasets/pull/3368
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3368", "html_url": "https://github.com/huggingface/datasets/pull/3368", "diff_url": "https://github.com/huggingface/datasets/pull/3368.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3368.patch", "merged_at": "2021-12-02T15:48:37" }
3,368
true
Fix typo in other-structured-to-text task tag
Fix typo in task tag: - `other-stuctured-to-text` (before) - `other-structured-to-text` (now)
https://github.com/huggingface/datasets/pull/3367
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3367", "html_url": "https://github.com/huggingface/datasets/pull/3367", "diff_url": "https://github.com/huggingface/datasets/pull/3367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3367.patch", "merged_at": "2021-12-02T16:07:13" }
3,367
true
Add multimodal datasets
Epic issue to track the addition of multimodal datasets: - [ ] #2526 - [x] #1842 - [ ] #1810 Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). @VictorSanh feel free to add and sort by priority any interesting dataset. I have added the multimodal dataset requests which were already present as issues.
https://github.com/huggingface/datasets/issues/3366
[]
null
3,366
false
Add task tags for multimodal datasets
## **Is your feature request related to a problem? Please describe.** Currently, task tags are either exclusively related to text or speech processing: - https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/tasks.json ## **Describe the solution you'd like** We should also add tasks related to: - multimodality - image - video CC: @VictorSanh @lewtun @lhoestq @merveenoyan @SBrandeis
https://github.com/huggingface/datasets/issues/3365
[]
null
3,365
false
Use the Audio feature in the AutomaticSpeechRecognition template
This updates the ASR template and all supported datasets to use the `Audio` feature
https://github.com/huggingface/datasets/pull/3364
[ "Cool !\r\n\r\nI noticed that you removed the `audio_file_path_column` field of the template, note that you also have to update all the dataset_infos.json file that still contain this outdated field. For example in the common_voice you can find this:\r\n```\r\n\"task_templates\": [{\"task\": \"automatic-speech-reco...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3364", "html_url": "https://github.com/huggingface/datasets/pull/3364", "diff_url": "https://github.com/huggingface/datasets/pull/3364.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3364.patch", "merged_at": null }
3,364
true
Update URL of Jeopardy! dataset
Updates the URL of the Jeopardy! dataset. Fix #3361
https://github.com/huggingface/datasets/pull/3363
[ "Closing this PR in favor of #3266.", "I think you should also close this branch" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3363", "html_url": "https://github.com/huggingface/datasets/pull/3363", "diff_url": "https://github.com/huggingface/datasets/pull/3363.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3363.patch", "merged_at": null }
3,363
true
Adapt image datasets
This PR: * adapts the ImageClassification template to use the new Image feature * adapts the following datasets to use the new Image feature: * beans (+ fixes streaming) * cats_vs_dogs (+ fixes streaming) * cifar10 * cifar100 * fashion_mnist * mnist * head_qa cc @nateraw
https://github.com/huggingface/datasets/pull/3362
[ "This PR can be merged after #3163 is merged (this PR is pretty big because I was working on the forked branch).\r\n\r\n@lhoestq @albertvillanova Could you please take a look at the changes in `src/datasets/utils/streaming_download_manager.py`? These changes were required to support streaming of the `cats_vs_dogs` ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3362", "html_url": "https://github.com/huggingface/datasets/pull/3362", "diff_url": "https://github.com/huggingface/datasets/pull/3362.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3362.patch", "merged_at": "2021-12-09T18:37:41" }
3,362
true
Jeopardy _URL access denied
## Describe the bug http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz returns Access Denied now. However, https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?usp=sharing from the original Reddit post https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/ may work. ## Steps to reproduce the bug ```shell > python Python 3.7.12 (default, Sep 5 2021, 08:34:29) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" for more information. ``` ```python >>> from datasets import load_dataset >>> load_dataset("jeopardy") ``` ## Expected results The download completes. ## Actual results ```shell Downloading: 4.18kB [00:00, 1.60MB/s] Downloading: 2.03kB [00:00, 1.04MB/s] Using custom data configuration default Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /Users/mike/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/mike/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators filepath = dl_manager.download_and_extract(_DATA_URL) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 197, in download download_func, url_or_urls, map_tuple=True, num_proc=download_config.num_proc, disable_tqdm=False File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 197, in map_nested return function(data_struct) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 305, in cached_path use_auth_token=download_config.use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` --- 
```shell > curl http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` ```xml <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>70Y9R36XNPEQXMGV</RequestId><HostId>G6F5AK4qo7JdaEdKGMtS0P6gdLPeFOdEfSEfvTOZEfk9km0/jAfp08QLfKSTFFj1oWIKoAoBehM=</HostId></Error> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: macOS Catalina 10.15.7 - Python version: 3.7.12 - PyArrow version: 6.0.1
https://github.com/huggingface/datasets/issues/3361
[ "Just a side note: duplicate #3264" ]
null
3,361
false
Add The Pile USPTO subset
Add: - USPTO subset of The Pile: "uspto" config Close bigscience-workshop/data_tooling#297. CC: @StellaAthena
https://github.com/huggingface/datasets/pull/3360
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3360", "html_url": "https://github.com/huggingface/datasets/pull/3360", "diff_url": "https://github.com/huggingface/datasets/pull/3360.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3360.patch", "merged_at": "2021-12-03T11:45:27" }
3,360
true
Add The Pile Free Law subset
Add: - Free Law subset of The Pile: "free_law" config Close bigscience-workshop/data_tooling#75. CC: @StellaAthena
https://github.com/huggingface/datasets/pull/3359
[ "@albertvillanova Is there a specific reason you’re adding the Pile under “the” instead of under “pile”? That does not appear to be consistent with other datasets.", "Hi @StellaAthena,\r\n\r\nI asked myself the same question, but at the end I decided to be consistent with previously added Pile subsets:\r\n- #2817...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3359", "html_url": "https://github.com/huggingface/datasets/pull/3359", "diff_url": "https://github.com/huggingface/datasets/pull/3359.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3359.patch", "merged_at": "2021-12-01T17:30:43" }
3,359
true
add new field, and get errors
After adding the new field **tokenized_examples["example_id"]**, I get the errors below. I think it is due to converting the data to tensors, since **tokenized_examples["example_id"]** is a list of strings. **all fields** ``` ***************** train_dataset 1: Dataset({ features: ['attention_mask', 'end_positions', 'example_id', 'input_ids', 'start_positions', 'token_type_ids'], num_rows: 87714 }) ``` **Errors** ``` Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors tensor = as_tensor(value) ValueError: too many dimensions 'str' ```
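The conversion fails because the string column cannot be turned into a tensor. One way around it (a sketch, using the column names printed above) is to keep `example_id` out of the tensor-formatted columns:

```python
# Only tensorize the numeric columns the model actually consumes
model_columns = ["input_ids", "attention_mask", "token_type_ids", "start_positions", "end_positions"]
train_dataset.set_format(type="torch", columns=model_columns, output_all_columns=True)
# "example_id" is still accessible as a plain Python string column,
# but it is no longer converted to a tensor when batching.
```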
https://github.com/huggingface/datasets/issues/3358
[ "Hi, \r\n\r\ncould you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests? ", "> Hi,\r\n> \r\n> could you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests?\r\n\r\nok." ]
null
3,358
false
Update languages in aeslc dataset card
After having worked a bit with the dataset, as far as I know it is solely in English (en-US). There are only a few mails in Spanish, French or German (less than a dozen, I would estimate).
https://github.com/huggingface/datasets/pull/3357
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3357", "html_url": "https://github.com/huggingface/datasets/pull/3357", "diff_url": "https://github.com/huggingface/datasets/pull/3357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3357.patch", "merged_at": "2022-09-23T13:16:48" }
3,357
true
to_tf_dataset() refactor
This is the promised cleanup to `to_tf_dataset()` now that the course is out of the way! The main changes are: - A collator is always required (there was way too much hackiness making things like labels work without it) - Lots of cleanup and a lot of code moved to `_get_output_signature` - Should now handle it gracefully when the data collator adds unexpected columns
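A rough usage sketch of the refactored call (names such as `tokenized_dataset` are placeholders, and the exact arguments may differ from the final API):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
collator = DataCollatorWithPadding(tokenizer, return_tensors="tf")

tf_dataset = tokenized_dataset.to_tf_dataset(
    columns=["input_ids", "attention_mask", "token_type_ids"],
    label_cols=["label"],
    batch_size=16,
    shuffle=True,
    collate_fn=collator,  # a collator is now always required
)
```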
https://github.com/huggingface/datasets/pull/3356
[ "Also, please don't merge yet - I need to make sure all the code samples and notebooks have a collate_fn specified, since we're removing the ability for this method to work without one!", "Hi @lhoestq @mariosasko, the other PRs this was depending on in Transformers and huggingface/notebooks are now merged, so thi...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3356", "html_url": "https://github.com/huggingface/datasets/pull/3356", "diff_url": "https://github.com/huggingface/datasets/pull/3356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3356.patch", "merged_at": "2021-12-09T10:26:53" }
3,356
true
Extend support for streaming datasets that use pd.read_excel
This PR fixes error: ``` ValueError: Cannot seek streaming HTTP file ``` CC: @severo
https://github.com/huggingface/datasets/pull/3355
[ "TODO in the future: https://github.com/huggingface/datasets/pull/3355#discussion_r761138011\r\n- If we finally find a use case where the `pd.read_excel()` can work in streaming mode (using fsspec), that is, without using the `.read()`, I propose to try this first, catch the ValueError and then try with `.read`, bu...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3355", "html_url": "https://github.com/huggingface/datasets/pull/3355", "diff_url": "https://github.com/huggingface/datasets/pull/3355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3355.patch", "merged_at": "2021-12-17T07:24:18" }
3,355
true
Remove duplicate name from dataset cards
Remove duplicate name from dataset card for: - ajgt_twitter_ar - emotone_ar
https://github.com/huggingface/datasets/pull/3354
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3354", "html_url": "https://github.com/huggingface/datasets/pull/3354", "diff_url": "https://github.com/huggingface/datasets/pull/3354.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3354.patch", "merged_at": "2021-12-01T13:14:29" }
3,354
true
add one field "example_id", but I can't see it in the "compute_loss" function
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs ``` *********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0], [ 101, 2054, 2515, ..., 0, 0, 0], [ 101, 2054, 2106, ..., 0, 0, 0], ..., [ 101, 2339, 2001, ..., 0, 0, 0], [ 101, 2054, 2515, ..., 0, 0, 0], [ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], device='cuda:0')} ``` ``` # This function preprocesses a question answering dataset, tokenizing the question and context text # and finding the right offsets for the answer spans in the tokenized context (to use as labels). # Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py def prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None): questions = [q.lstrip() for q in examples["question"]] max_seq_length = tokenizer.model_max_length # tokenize both questions and the corresponding context # if the context length is longer than max_length, we split it to several # chunks of max_length tokenized_examples = tokenizer( questions, examples["context"], truncation="only_second", max_length=max_seq_length, stride=min(max_seq_length // 2, 128), return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length" ) # Since one example might give us several features if it has a long context, # we need a map from a feature to its corresponding example. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # The offset mappings will give us a map from token to character position # in the original context. This will help us compute the start_positions # and end_positions to get the final answer string. offset_mapping = tokenized_examples.pop("offset_mapping") tokenized_examples["start_positions"] = [] tokenized_examples["end_positions"] = [] tokenized_examples["example_id"] = [] for i, offsets in enumerate(offset_mapping): input_ids = tokenized_examples["input_ids"][i] # We will label features not containing the answer the index of the CLS token. cls_index = input_ids.index(tokenizer.cls_token_id) sequence_ids = tokenized_examples.sequence_ids(i) # from the feature idx to sample idx sample_index = sample_mapping[i] # get the answer for a feature answers = examples["answers"][sample_index] tokenized_examples["example_id"].append(examples["id"][sample_index]) if len(answers["answer_start"]) == 0: tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Start/end character index of the answer in the text. start_char = answers["answer_start"][0] end_char = start_char + len(answers["text"][0]) # Start token index of the current span in the text. token_start_index = 0 while sequence_ids[token_start_index] != 1: token_start_index += 1 # End token index of the current span in the text. 
token_end_index = len(input_ids) - 1 while sequence_ids[token_end_index] != 1: token_end_index -= 1 # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char): tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Otherwise move the token_start_index and token_end_index to the two ends of the answer. # Note: we could go after the last offset if the answer is the last word (edge case). while token_start_index < len(offsets) and \ offsets[token_start_index][0] <= start_char: token_start_index += 1 tokenized_examples["start_positions"].append( token_start_index - 1) while offsets[token_end_index][1] >= end_char: token_end_index -= 1 tokenized_examples["end_positions"].append(token_end_index + 1) return tokenized_examples ``` _Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/3333#issuecomment-983457161_
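Following the explanation in the comments (only model inputs reach `compute_loss` because unused columns are dropped), a rough sketch of one way to keep `example_id` around is to disable column removal, strip the string ids in a custom collator, and pop them back out before the forward pass; `model` and `train_dataset` are assumed to be defined elsewhere:

```python
from transformers import Trainer, TrainingArguments, default_data_collator

# Keep non-model columns such as "example_id" from being dropped by the Trainer
training_args = TrainingArguments(output_dir="out", remove_unused_columns=False)

def qa_collator(features):
    # Pull the string ids out before the default collator tries to tensorize them
    example_ids = [feature.pop("example_id") for feature in features]
    batch = default_data_collator(features)
    batch["example_id"] = example_ids
    return batch

class QATrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        example_ids = inputs.pop("example_id")  # visible here, but must not be passed to the model
        outputs = model(**inputs)
        return (outputs.loss, outputs) if return_outputs else outputs.loss

trainer = QATrainer(model=model, args=training_args, train_dataset=train_dataset, data_collator=qa_collator)
```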
https://github.com/huggingface/datasets/issues/3353
[ "Hi ! Your function looks fine, I used to map `squad` locally and it indeed added the `example_id` field correctly.\r\n\r\nHowever I think that in the `compute_loss` method only a subset of the fields are available: the model inputs. Since `example_id` is not a model input (it's not passed as a parameter to the mod...
null
3,353
false
Make LABR dataset streamable
Fix LABR dataset to make it streamable. Related to: #3350.
https://github.com/huggingface/datasets/pull/3352
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3352", "html_url": "https://github.com/huggingface/datasets/pull/3352", "diff_url": "https://github.com/huggingface/datasets/pull/3352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3352.patch", "merged_at": "2021-12-01T10:49:01" }
3,352
true
Add VCTK dataset
Fixes #1837.
https://github.com/huggingface/datasets/pull/3351
[ "Hello @patrickvonplaten, I hope it's okay to ping you with a (dumb) question!\r\n\r\nI've been trying to get `dl_manager.download_and_extract(_DL_URL)` to work with no avail. I verified that this is a problem on two different machines (lab server, GCP), so I doubt it's an issue with network connectivity. Here is t...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3351", "html_url": "https://github.com/huggingface/datasets/pull/3351", "diff_url": "https://github.com/huggingface/datasets/pull/3351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3351.patch", "merged_at": "2021-12-28T15:05:07" }
3,351
true
Avoid content-encoding issue while streaming datasets
This PR will fix streaming of datasets served with gzip content-encoding: ``` ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` Fix #2918. CC: @severo
https://github.com/huggingface/datasets/pull/3350
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3350", "html_url": "https://github.com/huggingface/datasets/pull/3350", "diff_url": "https://github.com/huggingface/datasets/pull/3350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3350.patch", "merged_at": "2021-12-01T08:15:00" }
3,350
true
raise exception instead of using assertions.
Fix for the remaining files: https://github.com/huggingface/datasets/issues/3171
https://github.com/huggingface/datasets/pull/3349
[ "@mariosasko - Thanks for the review & suggestions. Updated as per the suggestions. ", "@mariosasko - Hello, Are there any additional changes required from my end??. Wondering if this PR can be merged or still pending on additional steps.", "@mariosasko - The approved changes in the PR now has conflicts with th...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3349", "html_url": "https://github.com/huggingface/datasets/pull/3349", "diff_url": "https://github.com/huggingface/datasets/pull/3349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3349.patch", "merged_at": "2021-12-20T16:07:27" }
3,349
true
BLEURT: Match key names to correspond with filename
In order to properly locate downloaded ckpt files, the key name needs to match the filename. This corrects a change introduced in #3235.
https://github.com/huggingface/datasets/pull/3348
[ "Thanks for the suggestion! I think the current checked-in `CHECKPOINT_URLS` is already not working. I believe anyone who tried using the new ckpts (`BLEURT-20-X`) can't unless this fix is in. The zip file from bleurt side unzips to directory name matching the filename (capitalized for new ones). For example withou...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3348", "html_url": "https://github.com/huggingface/datasets/pull/3348", "diff_url": "https://github.com/huggingface/datasets/pull/3348.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3348.patch", "merged_at": "2021-12-07T16:06:57" }
3,348
true