| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | stringlengths | 1 | 290 |
| body | stringlengths | 0 | 228k |
| state | stringclasses | 2 values | |
| html_url | stringlengths | 46 | 51 |
| created_at | timestamp[s]date | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s]date | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s]date | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | listlengths | 0 | 4 |
| is_pull_request | bool | 2 classes | |
| comments | listlengths | 0 | 0 |
1,148,186,272
3,780
Add ElkarHizketak v1.0 dataset
null
closed
https://github.com/huggingface/datasets/pull/3780
2022-02-23T14:44:17
2022-03-04T19:04:29
2022-03-04T19:04:29
{ "login": "antxa", "id": 7646055, "type": "User" }
[]
true
[]
1,148,050,636
3,779
Update manual download URL in newsroom dataset
Fix #3778.
closed
https://github.com/huggingface/datasets/pull/3779
2022-02-23T12:49:07
2022-02-23T13:26:41
2022-02-23T13:26:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,147,898,946
3,778
Not able to download dataset - "Newsroom"
Hello, I tried to download the **newsroom** dataset but it didn't work for me. It asked me to **download it manually**, but the manual download link didn't work either; it shows an ad or something similar. If anybody has solved this issue, please help me out, or if somebody has this dataset, please share your Google Drive link; it would be a great help! Thanks, Darshan Tank
closed
https://github.com/huggingface/datasets/issues/3778
2022-02-23T10:15:50
2022-02-23T17:05:04
2022-02-23T13:26:40
{ "login": "Darshan2104", "id": 61326242, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,147,232,875
3,777
Start removing canonical datasets logic
I updated the source code and the documentation to start removing the "canonical datasets" logic. Indeed this makes the documentation confusing and we don't want this distinction anymore in the future. Ideally users should share their datasets on the Hub directly.

### Changes

- The documentation about dataset loading mentions the datasets on the Hub (no difference between canonical and community, since they all have their own repository now).
- The documentation about adding a dataset no longer explains the technical differences between canonical and community, and only presents how to add a community dataset. There is still a small section at the bottom that mentions the datasets that are still on GitHub and redirects to the `ADD_NEW_DATASET.md` guide on GitHub about how to contribute a dataset to the `datasets` library.
- The source code doesn't mention "canonical" anywhere anymore. There is still a `GitHubDatasetModuleFactory` class left, but I updated the docstring to say that it will eventually be removed in favor of the `HubDatasetModuleFactory` classes that already exist.

Would love to have your feedback on this! cc @julien-c @thomwolf @SBrandeis
closed
https://github.com/huggingface/datasets/pull/3777
2022-02-22T18:23:30
2022-02-24T15:04:37
2022-02-24T15:04:36
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,146,932,871
3,776
Allow download only some files from the Wikipedia dataset
**Is your feature request related to a problem? Please describe.**
The Wikipedia dataset can be really big. This is a problem if you want to use it locally on a laptop with the Apache Beam `DirectRunner`, even if your laptop has a considerable amount of memory (e.g. 32 GB).

**Describe the solution you'd like**
I would like to use the `data_files` argument of the `load_dataset` function to define which files of the Wikipedia dataset I would like to download. That way I can work with the dataset on a smaller machine using the Apache Beam `DirectRunner`.

**Describe alternatives you've considered**
I've tried to use the `simple` Wikipedia dataset, but it's in English and I would like to use Portuguese texts in my model.
open
https://github.com/huggingface/datasets/issues/3776
2022-02-22T13:46:41
2022-02-22T14:50:02
null
{ "login": "jvanz", "id": 1514798, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,146,849,454
3,775
Update gigaword card and info
Reported on the forum: https://discuss.huggingface.co/t/error-loading-dataset/14999
closed
https://github.com/huggingface/datasets/pull/3775
2022-02-22T12:27:16
2022-02-28T11:35:24
2022-02-28T11:35:24
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,146,843,177
3,774
Fix reddit_tifu data URL
Fix #3773.
closed
https://github.com/huggingface/datasets/pull/3774
2022-02-22T12:21:15
2022-02-22T12:38:45
2022-02-22T12:38:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,146,758,335
3,773
Checksum mismatch for the reddit_tifu dataset
## Describe the bug
A checksum mismatch occurs when downloading the reddit_tifu data (both long & short).

## Steps to reproduce the bug
```python
reddit_tifu_dataset = load_dataset('reddit_tifu', 'long')
```

## Expected results
The dataset is downloaded and cached locally.

## Actual results
```
File "/.../lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
    raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']
```

## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 7.0.0
closed
https://github.com/huggingface/datasets/issues/3773
2022-02-22T10:57:07
2022-02-25T19:27:49
2022-02-22T12:38:44
{ "login": "anna-kay", "id": 56791604, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
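The `NonMatchingChecksumError` above comes from a verification step that compares the hash of the downloaded bytes against a recorded value. A minimal sketch of that kind of check (a simplified illustration, not the library's actual `verify_checksums` implementation):

```python
import hashlib

def verify_checksum(data: bytes, expected_sha256: str) -> None:
    # Compare the hash of the downloaded bytes against the recorded one;
    # a stale recorded checksum (e.g. after the host changed the file)
    # triggers exactly this kind of mismatch error.
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise ValueError(f"Checksums didn't match: got {actual}")

payload = b"reddit_tifu archive bytes"
verify_checksum(payload, hashlib.sha256(payload).hexdigest())  # passes
```

This is why such errors are usually fixed by updating the recorded URL or metadata, as the companion PR #3774 does.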
1,146,718,630
3,772
Fix: dataset name is stored in keys
null
closed
https://github.com/huggingface/datasets/pull/3772
2022-02-22T10:20:37
2022-02-22T11:08:34
2022-02-22T11:08:33
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
1,146,561,140
3,771
Fix DuplicatedKeysError on msr_sqa dataset
Fix #3770.
closed
https://github.com/huggingface/datasets/pull/3771
2022-02-22T07:44:24
2022-02-22T08:12:40
2022-02-22T08:12:39
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,146,336,667
3,770
DuplicatedKeysError on msr_sqa dataset
### Describe the bug
Failure to generate dataset msr_sqa because of duplicate keys.

### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("msr_sqa")
```

### Expected results
The example keys should be unique.

### Actual results
```
>>> load_dataset("msr_sqa")
Downloading: 6.72k/? [00:00<00:00, 148kB/s]
Downloading: 2.93k/? [00:00<00:00, 53.8kB/s]
Using custom data configuration default
Downloading and preparing dataset msr_sqa/default (download: 4.57 MiB, generated: 26.25 MiB, post-processed: Unknown size, total: 30.83 MiB) to /root/.cache/huggingface/datasets/msr_sqa/default/0.0.0/70b2a497bd3cc8fc960a3557d2bad1eac5edde824505e15c9c8ebe4c260fd4d1...
Downloading: 100% 4.80M/4.80M [00:00<00:00, 7.49MB/s]
---------------------------------------------------------------------------
DuplicatedKeysError                       Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
   1080                     example = self.info.features.encode_example(record)
-> 1081                     writer.write(example, key)
   1082             finally:

8 frames
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature

During handling of the above exception, another exception occurred:

DuplicatedKeysError                       Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
    449         for hash, key in self.hkey_record:
    450             if hash in tmp_record:
--> 451                 raise DuplicatedKeysError(key)
    452             else:
    453                 tmp_record.add(hash)

DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature
```

### Environment info
- `datasets` version: 1.18.3
- Platform: Google Colab notebook
- Python version: 3.7
- PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3770
2022-02-22T00:43:33
2022-02-22T08:12:39
2022-02-22T08:12:39
{ "login": "kolk", "id": 9049591, "type": "User" }
[]
false
[]
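The duplicate-key check the traceback points at boils down to remembering the keys seen so far and failing fast on the first repeat. A minimal sketch of that logic (simplified; not the actual `arrow_writer` code, which hashes keys first):

```python
def check_unique_keys(keys):
    # Fail on the first repeated key, mirroring the DuplicatedKeysError
    # behaviour shown in the traceback above.
    seen = set()
    for key in keys:
        if key in seen:
            raise ValueError(f"Found duplicate key: {key}")
        seen.add(key)

check_unique_keys(["nt-638", "nt-639", "nt-640"])  # unique keys pass
```

A fix like the companion PR #3771 makes the dataset script emit keys that satisfy this check.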
1,146,258,023
3,769
`dataset = dataset.map()` causes faiss index to be lost
## Describe the bug
Assigning the resulting dataset back to the original dataset causes loss of the faiss index.

## Steps to reproduce the bug
`my_dataset` is a regularly loaded dataset; it's part of a custom dataset structure.

```python
self.dataset.add_faiss_index('embeddings')
self.dataset.list_indexes()  # ['embeddings']
dataset2 = my_dataset.map(
    lambda x: self._get_nearest_examples_batch(x['text']),
    batch=True
)
# the unexpected result:
dataset2.list_indexes()  # []
self.dataset.list_indexes()  # ['embeddings']
```

In case something is wrong with my `_get_nearest_examples_batch()`, it looks like this:

```python
def _get_nearest_examples_batch(self, examples, k=5):
    queries = embed(examples)
    scores_batch, retrievals_batch = self.dataset.get_nearest_examples_batch(self.faiss_column, queries, k)
    return {
        'neighbors': [batch['text'] for batch in retrievals_batch],
        'scores': scores_batch
    }
```

## Expected results
`map` shouldn't drop the indexes; in other words, indexes should be carried over to the generated dataset.

## Actual results
`map` drops the indexes.

## Environment info
- `datasets` version: 1.18.3
- Platform: Ubuntu 20.04.3 LTS
- Python version: 3.8.12
- PyArrow version: 7.0.0
open
https://github.com/huggingface/datasets/issues/3769
2022-02-21T21:59:23
2022-06-27T14:56:29
null
{ "login": "Oaklight", "id": 13076552, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,146,102,442
3,768
Fix HfFileSystem docstring
null
closed
https://github.com/huggingface/datasets/pull/3768
2022-02-21T18:14:40
2022-02-22T09:13:03
2022-02-22T09:13:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,146,036,648
3,767
Expose method and fix param
A fix + expose a new method, following https://github.com/huggingface/datasets/pull/3670
closed
https://github.com/huggingface/datasets/pull/3767
2022-02-21T16:57:47
2022-02-22T08:35:03
2022-02-22T08:35:02
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
1,145,829,289
3,766
Fix head_qa data URL
Fix #3758.
closed
https://github.com/huggingface/datasets/pull/3766
2022-02-21T13:52:50
2022-02-21T14:39:20
2022-02-21T14:39:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,145,126,881
3,765
Update URL for tagging app
This PR updates the URL for the tagging app to be the one on Spaces.
closed
https://github.com/huggingface/datasets/pull/3765
2022-02-20T20:34:31
2022-02-20T20:36:10
2022-02-20T20:36:06
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
1,145,107,050
3,764
!
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/3764
2022-02-20T19:05:43
2022-02-21T08:55:58
2022-02-21T08:55:58
{ "login": "LesiaFedorenko", "id": 77545307, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,145,099,878
3,763
It's not possible to download the `20200501.pt` dataset
## Describe the bug
The dataset `20200501.pt` is broken. The available dumps are listed at https://dumps.wikimedia.org/ptwiki/

## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
```

## Expected results
I expect the dataset to be downloaded locally.

## Actual results
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
Downloading and preparing dataset wikipedia/20200501.pt to /home/jvanz/.cache/huggingface/datasets/wikipedia/20200501.pt/1.0.0/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475...
/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/apache_beam/__init__.py:79: UserWarning: This version of Apache Beam has not been sufficiently tested on Python 3.9. You may encounter bugs or missing features.
  warnings.warn(
  0%|          | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
    self._download_and_prepare(
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 1245, in _download_and_prepare
    super()._download_and_prepare(
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/jvanz/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475/wikipedia.py", line 420, in _split_generators
    downloaded_files = dl_manager.download_and_extract({"info": info_url})
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 307, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download
    downloaded_path_or_paths = map_nested(
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 260, in map_nested
    mapped = [
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 261, in <listcomp>
    _single_map_nested((function, obj, types, None, True))
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 196, in _single_map_nested
    return function(data_struct)
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 216, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
    output_path = get_from_cache(
  File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 612, in get_from_cache
    raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json
```

## Environment info
```
- `datasets` version: 1.18.3
- Platform: Linux-5.3.18-150300.59.49-default-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
```
closed
https://github.com/huggingface/datasets/issues/3763
2022-02-20T18:34:58
2022-02-21T12:06:12
2022-02-21T09:25:06
{ "login": "jvanz", "id": 1514798, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,144,849,557
3,762
`Dataset.class_encode` should support custom class names
I can make a PR, just wanted approval before starting.

**Is your feature request related to a problem? Please describe.**
It is often the case that classes are not in alphabetical order. The current `class_encode_column` sorts the classes before indexing.
https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1235

**Describe the solution you'd like**
I would like to add an **optional** parameter `class_names` to `class_encode_column` that would be used for the mapping instead of sorting the unique values.

**Describe alternatives you've considered**
One can use `map` instead; I find it harder to read.

```python
CLASS_NAMES = ['apple', 'orange', 'potato']
ds = ds.map(lambda item: CLASS_NAMES.index(item[label_column]))

# Proposition
ds = ds.class_encode_column(label_column, CLASS_NAMES)
```

**Additional context**
I can make the PR if this feature is accepted.
closed
https://github.com/huggingface/datasets/issues/3762
2022-02-19T21:21:45
2022-02-21T12:16:35
2022-02-21T12:16:35
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
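A sketch of the proposed behaviour: label indices follow the user-supplied `class_names` order when given, and fall back to sorted unique values otherwise. This is a hypothetical standalone helper, not the library's API:

```python
def class_encode(values, class_names=None):
    # Proposed behaviour: if class_names is given, use its order for the
    # label indices; otherwise fall back to sorted unique values
    # (the current behaviour of class_encode_column).
    names = class_names if class_names is not None else sorted(set(values))
    index = {name: i for i, name in enumerate(names)}
    return [index[v] for v in values], names

labels, names = class_encode(["orange", "apple", "orange"],
                             class_names=["apple", "orange", "potato"])
print(labels)  # [1, 0, 1]
```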
1,144,830,702
3,761
Know your data for HF hub
**Is your feature request related to a problem? Please describe.**
It would be great to be able to understand datasets, with the goal of improving data quality and helping mitigate fairness and bias issues.

**Describe the solution you'd like**
Something like https://knowyourdata.withgoogle.com/ for the HF hub.
closed
https://github.com/huggingface/datasets/issues/3761
2022-02-19T19:48:47
2022-02-21T14:15:23
2022-02-21T14:15:23
{ "login": "Muhtasham", "id": 20128202, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,144,804,558
3,760
Unable to view the Gradio flagged call back dataset
## Dataset viewer issue for '*savtadepth-flags*'

**Link:** [savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)

With Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually, but it's not working. The dataset is also not showing the link with the app https://huggingface.co/spaces/kingabzpro/savtadepth.

Am I the one who added this dataset? Yes
closed
https://github.com/huggingface/datasets/issues/3760
2022-02-19T17:45:08
2022-03-22T07:12:11
2022-03-22T07:12:11
{ "login": "kingabzpro", "id": 36753484, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,143,400,770
3,759
Rename GenerateMode to DownloadMode
This PR:
- Renames `GenerateMode` to `DownloadMode`
- Implements `DeprecatedEnum`
- Deprecates `GenerateMode`

Close #769.
closed
https://github.com/huggingface/datasets/pull/3759
2022-02-18T16:53:53
2022-02-22T13:57:24
2022-02-22T12:22:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,143,366,393
3,758
head_qa file missing
## Describe the bug
A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json)

## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("head_qa", name="en")
```

## Expected results
The dataset should be loaded.

## Actual results
```
Downloading and preparing dataset head_qa/en (download: 75.69 MiB, generated: 2.69 MiB, post-processed: Unknown size, total: 78.38 MiB) to /home/slesage/.cache/huggingface/datasets/head_qa/en/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9...
Downloading data: 2.21kB [00:00, 2.05MB/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1729, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
    self._download_and_prepare(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare
    verify_checksums(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
    raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t']
```

## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.11.0-1028-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3758
2022-02-18T16:32:43
2022-02-28T14:29:18
2022-02-21T14:39:19
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,143,300,880
3,757
Add perplexity to metrics
Adding perplexity metric.

This code differs from the code in [this](https://huggingface.co/docs/transformers/perplexity) HF blog post because the blog post code fails in at least the following circumstances:
- it returns NaNs whenever the stride = 1
- it hits a runtime error when the stride is significantly larger than the max model length (e.g. if max_model_length = 512 and stride = 1024)

Note that:
- As it is, it only works for causal models. Pseudo-perplexity can be added later as another metric to work with masked language models.
- It takes in a list of strings so that it can be dataset independent. This does mean that it doesn't currently batch inputs, and is therefore relatively slow.
- It overrides the metric's compute() function with a perplexity-specific compute() function. This is because the current general metrics compute() function requires model-generated predictions, which don't make sense in the context of perplexity.
closed
https://github.com/huggingface/datasets/pull/3757
2022-02-18T15:52:23
2022-02-25T17:13:34
2022-02-25T17:13:34
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
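For context, perplexity for a causal model is the exponential of the mean negative log-likelihood of the tokens. A toy sketch of that definition on precomputed token log-probabilities (not the PR's implementation, which runs an actual model over strings):

```python
import math

def perplexity(token_logprobs):
    # Perplexity of a sequence is exp of the mean negative
    # log-likelihood of its tokens (natural log assumed here).
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every token has perplexity 4.
logprobs = [math.log(0.25)] * 10
print(round(perplexity(logprobs), 6))  # 4.0
```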
1,143,273,825
3,756
Images get decoded when using `map()` with `input_columns` argument on a dataset
## Describe the bug
The `datasets.features.Image` feature class decodes image data by default. Expectedly, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances. However, when calling `map()` and setting a specific data column with the `input_columns` argument, the image data is passed as a raw byte representation to the mapping function.

## Steps to reproduce the bug
```python
from datasets import load_dataset
from torchvision import transforms
from PIL.Image import Image

dataset = load_dataset('mnist', split='train')

def transform_all_columns(example):
    # example['image'] is encoded as PIL Image
    assert isinstance(example['image'], Image)
    return example

def transform_image_column(image):
    # image is decoded here and represented as raw bytes
    assert isinstance(image, Image)
    return image

# single-sample dataset for debugging purposes
dev = dataset.select([0])
dev.map(transform_all_columns)
dev.map(transform_image_column, input_columns='image')
```

## Expected results
Image data should be passed in decoded form, i.e. as PIL Image objects, to the mapping function unless the `decode` attribute on the image feature is set to `False`.

## Actual results
The mapping function receives images as raw byte data.

## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-5.11.0-49-generic-x86_64-with-glibc2.32
- Python version: 3.8.0b4
- PyArrow version: 7.0.0
closed
https://github.com/huggingface/datasets/issues/3756
2022-02-18T15:35:38
2022-12-13T16:59:06
2022-12-13T16:59:06
{ "login": "kklemon", "id": 1430243, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,143,032,961
3,755
Cannot preview dataset
## Dataset viewer issue for '*rubrix/news*'

**Link:** https://huggingface.co/datasets/rubrix/news

Cannot see the dataset preview:
```
Status code: 400
Exception: Status400Error
Message: Not found. Cache is waiting to be refreshed.
```

Am I the one who added this dataset? No
closed
https://github.com/huggingface/datasets/issues/3755
2022-02-18T13:06:45
2022-02-19T14:30:28
2022-02-18T15:41:33
{ "login": "frascuchon", "id": 2518789, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,142,886,536
3,754
Overflowing indices in `select`
## Describe the bug
The `Dataset.select` function seems to accept indices that are larger than the dataset size and seems to effectively use `index % len(ds)`.

## Steps to reproduce the bug
```python
from datasets import Dataset

ds = Dataset.from_dict({"test": [1, 2, 3]})
ds = ds.select(range(5))
print(ds)
print()
print(ds["test"])
```

Result:
```python
Dataset({
    features: ['test'],
    num_rows: 5
})

[1, 2, 3, 1, 2]
```

This behaviour is not documented and can lead to unexpected results, for example when taking a sample larger than the dataset and thus creating a lot of duplicates.

## Expected results
I think this should throw an error or at least a very big warning:
```python
IndexError: Invalid key: 5 is out of bounds for size 3
```

## Environment info
- `datasets` version: 1.18.3
- Platform: macOS-12.0.1-x86_64-i386-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0
closed
https://github.com/huggingface/datasets/issues/3754
2022-02-18T11:30:52
2022-02-18T11:38:23
2022-02-18T11:38:23
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
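The bounds check the issue asks for amounts to validating each index before gathering rows. A minimal sketch on a plain Python list (hypothetical helper, not the `Dataset.select` source):

```python
def checked_select(data, indices):
    # Raise instead of silently wrapping out-of-range indices,
    # matching the IndexError message the issue proposes.
    size = len(data)
    for i in indices:
        if not 0 <= i < size:
            raise IndexError(f"Invalid key: {i} is out of bounds for size {size}")
    return [data[i] for i in indices]

print(checked_select([1, 2, 3], [0, 2]))  # [1, 3]
```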
1,142,821,144
3,753
Expanding streaming capabilities
Some ideas for a few features that could be useful when working with large datasets in streaming mode.

## `filter` for `IterableDataset`
Adding filtering to streaming datasets would be useful in several scenarios:
- filter a dataset with many languages for a subset of languages
- filter a dataset for specific licenses
- other custom logic to get a subset

The only way to achieve this at the moment is, I think, through writing a custom loading script and implementing filters there.

## `IterableDataset` to `Dataset` conversion
In combination with the above filter, a functionality to "play" the whole stream would be useful. The motivation is that one might often filter the dataset to get a manageable size for experimentation. In that case streaming mode is no longer necessary, as the filtered dataset is small enough, and it would be useful to be able to play through the whole stream to create a normal `Dataset` with all its benefits.

```python
ds = load_dataset("some_large_dataset", streaming=True)
ds_filter = ds.filter(lambda x: x["lang"] == "fr")
ds_filter = ds_filter.stream()  # here the `IterableDataset` is converted to a `Dataset`
```

Naturally, this could be expanded with `stream(n=1000)`, which creates a `Dataset` with the first `n` elements, similar to `take`.

## Stream to the Hub
While streaming allows using a dataset as-is without saving the whole dataset on the local machine, it is currently not possible to process a dataset and add it to the hub. The only way to do this is by downloading the full dataset and saving the processed dataset again before pushing it to the hub.

The API could look something like:

```python
ds = load_dataset("some_large_dataset", streaming=True)
ds_filter = ds.filter(some_filter_func)
ds_processed = ds_filter.map(some_processing_func)
ds_processed.push_to_hub("new_better_dataset", batch_size=100_000)
```

Under the hood this could be done by processing and aggregating `batch_size` elements and then pushing that batch as a single file to the hub. With this functionality one could process and create TB-scale datasets while only requiring `batch_size` worth of local disk space.

cc @lhoestq @albertvillanova
open
https://github.com/huggingface/datasets/issues/3753
2022-02-18T10:45:41
2025-03-19T14:50:14
null
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
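The proposed stream-to-`Dataset` conversion is essentially "consume the iterable into memory", optionally bounded like `take(n)`. A sketch of those semantics on a plain generator (a hypothetical `materialize` helper, not an existing `datasets` API):

```python
from itertools import islice

def materialize(stream, n=None):
    # "Play" an iterable stream into an in-memory list; with n set this
    # mirrors take(n), without it the whole stream is consumed.
    return list(islice(stream, n)) if n is not None else list(stream)

stream = (i * i for i in range(10))
print(materialize(stream, n=4))  # [0, 1, 4, 9]
```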
1,142,627,889
3,752
Update metadata JSON for cats_vs_dogs dataset
Note that the number of examples in the train split was already fixed in the dataset card. Fix #3750.
closed
https://github.com/huggingface/datasets/pull/3752
2022-02-18T08:32:53
2022-02-18T14:56:12
2022-02-18T14:56:11
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,142,609,327
3,751
Fix typo in train split name
In the README guide (and consequently in many datasets) there was a typo in the train split name:
```
| Tain | Valid | Test |
```

This PR:
- fixes the typo in the train split name
- fixes the column alignment of the split tables in the README guide and in all datasets.
closed
https://github.com/huggingface/datasets/pull/3751
2022-02-18T08:18:04
2022-02-18T14:28:52
2022-02-18T14:28:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,142,408,331
3,750
`NonMatchingSplitsSizesError` for cats_vs_dogs dataset
## Describe the bug
Cannot download the cats_vs_dogs dataset due to `NonMatchingSplitsSizesError`.

## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cats_vs_dogs")
```

## Expected results
Loading is successful.

## Actual results
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=7503250, num_examples=23422, dataset_name='cats_vs_dogs'), 'recorded': SplitInfo(name='train', num_bytes=7262410, num_examples=23410, dataset_name='cats_vs_dogs')}]
```

## Environment info
Reproduced on a fresh [Colab notebook](https://colab.research.google.com/drive/13GTvrSJbBGvL2ybDdXCBZwATd6FOkMub?usp=sharing).

## Additional Context
Originally reported in https://github.com/huggingface/transformers/issues/15698.

cc @mariosasko
closed
https://github.com/huggingface/datasets/issues/3750
2022-02-18T05:46:39
2022-02-18T14:56:11
2022-02-18T14:56:11
{ "login": "jaketae", "id": 25360440, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,142,156,678
3,749
Add tqdm arguments
In this PR, tqdm arguments can be passed to the `map()` function and similar methods, to make progress reporting more flexible.
closed
https://github.com/huggingface/datasets/pull/3749
2022-02-18T01:34:46
2022-03-08T09:38:48
2022-03-08T09:38:48
{ "login": "penguinwang96825", "id": 28087825, "type": "User" }
[]
true
[]
1,142,128,763
3,748
Add tqdm arguments
In this PR, there are two changes:
1. Show the progress bar by providing the length of the iterator.
2. Pass in `tqdm_kwargs` to allow finer control of the tqdm progress bar.
closed
https://github.com/huggingface/datasets/pull/3748
2022-02-18T00:47:55
2022-02-18T00:59:15
2022-02-18T00:59:15
{ "login": "penguinwang96825", "id": 28087825, "type": "User" }
[]
true
[]
1,141,688,854
3,747
Passing invalid subset should throw an error
## Describe the bug
Only some datasets have a subset (as in `load_dataset(name, subset)`). If you pass an invalid subset, an error should be thrown.

## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('rotten_tomatoes', 'asdfasdfa')
```

## Expected results
This should break, since `'asdfasdfa'` isn't a subset of the `rotten_tomatoes` dataset.

## Actual results
This API call silently succeeds.
open
https://github.com/huggingface/datasets/issues/3747
2022-02-17T18:16:11
2022-02-17T18:16:11
null
{ "login": "jxmorris12", "id": 13238952, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
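The requested behaviour is a plain membership check against the known configurations before loading. A sketch (hypothetical helper, not the `datasets` loading code):

```python
def resolve_config(name, available_configs):
    # Raise on an unknown configuration instead of silently succeeding,
    # which is the behaviour the issue asks for.
    if name not in available_configs:
        raise ValueError(
            f"Unknown config {name!r}; available: {sorted(available_configs)}"
        )
    return name

resolve_config("default", {"default"})  # a known config passes through
```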
1,141,612,810
3,746
Use the same seed to shuffle shards and metadata in streaming mode
When shuffling in streaming mode, those two entangled lists are shuffled independently. In this PR I changed this to shuffle lists of the same length with the exact same seed, in order for the files and metadata to stay aligned:

```python
gen_kwargs = {
    "files": [os.path.join(data_dir, filename) for filename in all_files],
    "metadata_files": [all_metadata[filename] for filename in all_files],
}
```

IMO this is important to avoid big but silent issues.

Fix https://github.com/huggingface/datasets/issues/3744
closed
https://github.com/huggingface/datasets/pull/3746
2022-02-17T17:06:31
2022-02-23T15:00:59
2022-02-23T15:00:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
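The fix relies on the fact that shuffling two equal-length lists with identically seeded generators yields the same permutation, so pairs stay aligned. A sketch of that idea on plain lists (illustrative, not the PR's code):

```python
import random

def shuffle_aligned(files, metadata_files, seed):
    # Shuffling two lists of equal length with the same seed applies the
    # same permutation to both, so files[i] stays paired with
    # metadata_files[i] after the shuffle.
    files = list(files)
    metadata_files = list(metadata_files)
    random.Random(seed).shuffle(files)
    random.Random(seed).shuffle(metadata_files)
    return files, metadata_files

files = ["a.wav", "b.wav", "c.wav", "d.wav"]
meta = ["a.json", "b.json", "c.json", "d.json"]
shuffled_files, shuffled_meta = shuffle_aligned(files, meta, seed=42)
# each file is still aligned with its metadata entry
assert [f.split(".")[0] for f in shuffled_files] == \
       [m.split(".")[0] for m in shuffled_meta]
```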
1,141,520,953
3,745
Add mIoU metric
This PR adds the mean Intersection-over-Union metric to the library, useful for tasks like semantic segmentation. It is entirely based on mmseg's [implementation](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/core/evaluation/metrics.py). I've removed any PyTorch dependency, and rely on Numpy only.
closed
https://github.com/huggingface/datasets/pull/3745
2022-02-17T15:52:17
2022-03-08T13:20:26
2022-03-08T13:20:26
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[]
true
[]
1,141,461,165
3,744
Better shards shuffling in streaming mode
Sometimes a dataset script has a `_split_generators` that returns several files as well as the corresponding metadata of each file. It often happens that they end up in two separate lists in the `gen_kwargs`: ```python gen_kwargs = { "files": [os.path.join(data_dir, filename) for filename in all_files], "metadata_files": [all_metadata[filename] for filename in all_files], } ``` It happened for Multilingual Spoken Words for example in #3666 However currently **the two lists are shuffled independently** when shuffling the shards in streaming mode. This leads to `_generate_examples` not having the right metadata for each file. To prevent this issue I suggest that we always shuffle lists of the same length the exact same way to avoid such a big but silent issue. cc @polinaeterna
closed
https://github.com/huggingface/datasets/issues/3744
2022-02-17T15:07:21
2022-02-23T15:00:58
2022-02-23T15:00:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
1,141,176,011
3,743
initial monash time series forecasting repository
null
closed
https://github.com/huggingface/datasets/pull/3743
2022-02-17T10:51:31
2022-03-21T09:54:41
2022-03-21T09:50:16
{ "login": "kashif", "id": 8100, "type": "User" }
[]
true
[]
1,141,174,549
3,742
Fix ValueError message formatting in int2str
Hi! I bumped into this particular `ValueError` during my work (because an instance of `np.int64` was passed instead of regular Python `int`), and so I had to `print(type(values))` myself. Apparently, it's just the missing `f` to make message an f-string. It ain't much for a contribution, but it's honest work. Hope it spares someone else a few seconds in the future 😃
closed
https://github.com/huggingface/datasets/pull/3742
2022-02-17T10:50:08
2022-02-17T15:32:02
2022-02-17T15:32:02
{ "login": "aaakulchyk", "id": 41182803, "type": "User" }
[]
true
[]
1,141,132,649
3,741
Rm sphinx doc
Checklist - [x] Update circle ci yaml - [x] Delete sphinx static & python files in docs dir - [x] Update readme in docs dir - [ ] Update docs config in setup.py
closed
https://github.com/huggingface/datasets/pull/3741
2022-02-17T10:11:37
2022-02-17T10:15:17
2022-02-17T10:15:12
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,140,720,739
3,740
Support streaming for pubmed
This PR makes some minor changes to the `pubmed` dataset to allow for `streaming=True`. Fixes #3739. Basically, I followed the C4 dataset which works in streaming mode as an example, and made the following changes: * Change URL prefix from `ftp://` to `https://` * Explicilty `open` the filename and pass the XML contents to `etree.fromstring(xml_str)` The Github diff tool makes it look like the changes are larger than they are, sorry about that. I tested locally and the `pubmed` dataset now works in both normal and streaming modes. There is some overhead at the start of each shard in streaming mode as building the XML tree online is quite slow (each pubmed .xml.gz file is ~20MB), but the overhead gets amortized over all the samples in the shard. On my laptop with a single CPU worker I am able to stream at about ~600 samples/s.
closed
https://github.com/huggingface/datasets/pull/3740
2022-02-17T00:18:22
2022-02-18T14:42:13
2022-02-18T14:42:13
{ "login": "abhi-mosaic", "id": 77638579, "type": "User" }
[]
true
[]
1,140,329,189
3,739
Pubmed dataset does not work in streaming mode
## Describe the bug Trying to use the `pubmed` dataset with `streaming=True` fails. ## Steps to reproduce the bug ```python import datasets pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True) print (next(iter(pubmed_train))) ``` ## Expected results I would expect to see the first training sample from the pubmed dataset. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/abhinav/Documents/mosaicml/mosaicml_venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 367, in __iter__ for key, example in self._iter(): File "/Users/abhinav/Documents/mosaicml/mosaicml_venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 364, in _iter yield from ex_iterable File "/Users/abhinav/Documents/mosaicml/mosaicml_venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 79, in __iter__ for key, example in self.generate_examples_fn(**self.kwargs): File "/Users/abhinav/.cache/huggingface/modules/datasets_modules/datasets/pubmed/9715addf10c42a7877a2149ae0c5f2fddabefc775cd1bd9b03ac3f012b86ce46/pubmed.py", line 373, in _generate_examples tree = etree.parse(filename) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py", line 1202, in parse tree.parse(source, parser) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py", line 584, in parse source = open(source, "rb") FileNotFoundError: [Errno 2] No such file or directory: 'gzip://pubmed21n0001.xml::ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0001.xml.gz' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.2 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.2 - PyArrow version: 6.0.0 ## Comments The error looks like an issue with `open` vs. 
`xopen` inside the `xml` package. It looks like it's trying to open the remote source URL, which has been edited with prefix `gzip://...`. Maybe there can be an explicit `xopen` before passing the raw data to `etree`, something like: ```python # Before tree = etree.parse(filename) root = tree.getroot() # After with xopen(filename) as f: data_str = f.read() root = etree.fromstring(data_str) ```
closed
https://github.com/huggingface/datasets/issues/3739
2022-02-16T17:13:37
2022-02-18T14:42:13
2022-02-18T14:42:13
{ "login": "abhi-mosaic", "id": 77638579, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,140,164,253
3,738
For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files. In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys: ```python import datasets as ds iterable_dataset = ds.load_dataset("huggingface/transformers-metadata", split="train", streaming=True); rows = list(iterable_dataset.take(100)) rows[0] # {'model_type': 'albert', 'pytorch': True, 'tensorflow': True, 'flax': True, 'processor': 'AutoTokenizer'} rows[99] # {'model_class': 'BartModel', 'pipeline_tag': 'feature-extraction', 'auto_class': 'AutoModel'} ``` In normal mode, an exception is thrown: ```python import datasets as ds dataset = ds.load_dataset("huggingface/transformers-metadata", split="train"); ``` ``` ValueError: Couldn't cast model_class: string pipeline_tag: string auto_class: string to {'model_type': Value(dtype='string', id=None), 'pytorch': Value(dtype='bool', id=None), 'tensorflow': Value(dtype='bool', id=None), 'flax': Value(dtype='bool', id=None), 'processor': Value(dtype='string', id=None)} because column names don't match ```
open
https://github.com/huggingface/datasets/issues/3738
2022-02-16T15:20:57
2022-02-21T14:24:55
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,140,148,050
3,737
Make RedCaps streamable
Make RedCaps streamable. @lhoestq Using `data/redcaps_v1.0_annotations.zip` as a download URL gives an error locally when running `datasets-cli test` (will investigate this another time)
closed
https://github.com/huggingface/datasets/pull/3737
2022-02-16T15:12:23
2022-02-16T15:28:38
2022-02-16T15:28:37
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,140,134,483
3,736
Local paths in common voice
Continuation of https://github.com/huggingface/datasets/pull/3664: - pass the `streaming` parameter to _split_generator - update @anton-l's code to use this parameter for `common_voice` - add a comment to explain why we use `download_and_extract` in non-streaming and `iter_archive` in streaming Now the `common_voice` dataset has a local path back in `ds["path"]`, and this field is `None` in streaming mode. cc @patrickvonplaten @anton-l @albertvillanova Fix #3663.
closed
https://github.com/huggingface/datasets/pull/3736
2022-02-16T15:01:29
2022-09-21T14:58:38
2022-02-22T09:13:43
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,140,087,891
3,735
Performance of `datasets` at scale
# Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance of the library. ## Dataset The dataset is a 1.1TB extract from GitHub with 120M code files and is stored as 5000 `.json.gz` files. The goal of the preprocessing is to remove duplicates and filter files based on their stats. While the calculation of the hashes for deduplication and of the stats for filtering can be parallelized, the filtering itself is run with a single process. After processing, the files are pushed to the hub. ## Machine The experiment was run on an `m1` machine on GCP with 96 CPU cores and 1.3TB RAM. ## Performance breakdown - Loading the data **3.5h** (_30sec_ from cache) - **1h57min** single core loading (not sure what is going on here, corresponds to second progress bar) - **1h10min** multi core json reading - **20min** remaining time before and after the two main processes mentioned above - Process the data **2h** (_20min_ from cache) - **20min** Getting ready for processing - **40min** Hashing and files stats (96 workers) - **58min** Deduplication filtering (single worker) - Save parquet files **5h** - Saving 1000 parquet files (16 workers) - Push to hub **37min** - **34min** git add - **3min** git push (several hours with `Repository.git_push()`) ## Conclusion It appears that loading and saving the data is the main bottleneck at that scale (**8.5h**), whereas processing (**2h**) and pushing the data to the hub (**0.5h**) are relatively fast. To optimize the performance at this scale it would make sense to consider such an end-to-end example and target the bottlenecks, which seem to be loading from and saving to disk. The processing itself seems to run relatively fast. 
## Notes - map operation on a 1TB dataset with 96 workers requires >1TB RAM - map operation does not maintain 100% CPU utilization with 96 workers - sometimes when the script crashes all the data files have a corresponding `*.lock` file in the data folder (or multiple, e.g. `*.lock.lock` when it happened several times). This causes the cache **not** to be triggered (which is significant at that scale) - I guess because there are new data files - parallelizing `to_parquet` decreased the saving time from 17h to 5h, however adding more workers at this point had almost no effect. Not sure if this is: a) a bug in my parallelization logic, b) an I/O limit to load data from disk to memory or c) an I/O limit to write from memory to disk. - Using `Repository.git_push()` was much slower than using command line `git-lfs` - 10-20MB/s vs. 300MB/s! The `Dataset.push_to_hub()` function is even slower as it only uploads one file at a time with only a few MB/s, whereas `Repository.git_push()` pushes files in parallel (each at a similar speed). cc @lhoestq @julien-c @LysandreJik @SBrandeis
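The parallel sharded save mentioned in the notes could be sketched roughly like this (file names, the thread-based pool, and the plain-text writer are all illustrative stand-ins for `Dataset.shard(...).to_parquet(...)`):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def save_shard(rows, path):
    # Stand-in for Dataset.shard(i).to_parquet(path): write one shard to its own file.
    with open(path, "w") as f:
        f.writelines(f"{row}\n" for row in rows)
    return path

def save_sharded(rows, out_dir, num_shards=4, num_workers=4):
    # Split the data into contiguous shards and write them in parallel,
    # so saving is bounded by disk bandwidth rather than a single writer.
    shard_size = (len(rows) + num_shards - 1) // num_shards
    shards = [rows[i * shard_size:(i + 1) * shard_size] for i in range(num_shards)]
    paths = [os.path.join(out_dir, f"data-{i:05d}-of-{num_shards:05d}.txt") for i in range(num_shards)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(save_shard, shards, paths))
```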
open
https://github.com/huggingface/datasets/issues/3735
2022-02-16T14:23:32
2024-06-27T01:17:48
null
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
false
[]
1,140,050,336
3,734
Fix bugs in NewsQA dataset
Fix #3733.
closed
https://github.com/huggingface/datasets/pull/3734
2022-02-16T13:51:28
2022-02-17T07:54:26
2022-02-17T07:54:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,140,011,378
3,733
Bugs in NewsQA dataset
## Describe the bug NewsQA dataset has the following bugs: - the field `validated_answers` is an exact copy of the field `answers` but with the addition of `'count': [0]` to each dict - the field `badQuestion` does not appear in `answers` nor `validated_answers` ## Steps to reproduce the bug By inspecting the dataset script we can see that: - the parsing of `validated_answers` is a copy-paste of the one for `answers` - the `badQuestion` field is ignored in the parsing of both `answers` and `validated_answers`
closed
https://github.com/huggingface/datasets/issues/3733
2022-02-16T13:17:37
2022-02-17T07:54:25
2022-02-17T07:54:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,140,004,022
3,732
Support streaming in size estimation function in `push_to_hub`
This PR adds the streamable version of `os.path.getsize` (`fsspec` can return `None`, so we fall back to `fs.open` to make it more robust) to account for possible streamable paths in the nested `extra_nbytes_visitor` function inside `push_to_hub`.
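A rough sketch of that fallback (duck-typed `fs` argument and hypothetical helper name, not the actual implementation): ask the filesystem for the size, and open the file and seek to its end when `None` is returned:

```python
def xgetsize(path, fs):
    # fsspec filesystems may report size=None for some streamed paths,
    # so fall back to opening the file and seeking to its end.
    size = fs.info(path).get("size")
    if size is None:
        with fs.open(path, "rb") as f:
            f.seek(0, 2)  # SEEK_END
            size = f.tell()
    return size
```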
closed
https://github.com/huggingface/datasets/pull/3732
2022-02-16T13:10:48
2022-02-21T18:18:45
2022-02-21T18:18:44
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,139,626,362
3,731
Fix Multi-News dataset metadata and card
Fix #3730.
closed
https://github.com/huggingface/datasets/pull/3731
2022-02-16T07:14:57
2022-02-16T08:48:47
2022-02-16T08:48:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,139,545,613
3,730
Checksum Error when loading multi-news dataset
## Describe the bug When using the load_dataset function from datasets module to load the Multi-News dataset, does not load the dataset but throws Checksum Error instead. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("multi_news") ``` ## Expected results Should download and load Multi-News dataset. ## Actual results Throws the following error and cannot load data successfully: ``` NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C'] ``` Could this issue please be looked at? Thanks!
closed
https://github.com/huggingface/datasets/issues/3730
2022-02-16T05:11:08
2022-02-16T20:05:06
2022-02-16T08:48:46
{ "login": "byw2", "id": 60560991, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,139,398,442
3,729
Wrong number of examples when loading a text dataset
## Describe the bug When I use `load_dataset` to read a txt file, I find that the number of samples is incorrect. ## Steps to reproduce the bug ``` fr = open('train.txt','r',encoding='utf-8').readlines() print(len(fr)) # 1199637 datasets = load_dataset('text', data_files={'train': ['train.txt']}, streaming=False) print(len(datasets['train'])) # 1199649 ``` I also used a command line operation to verify it ``` $ wc -l train.txt 1199637 train.txt ``` ## Expected results The number of loaded examples should match the number of lines in the file. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.3 - Platform: windows & linux - Python version: 3.7 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3729
2022-02-16T01:13:31
2022-03-15T16:16:09
2022-03-15T16:16:09
{ "login": "kg-nlp", "id": 58376804, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,139,303,614
3,728
VoxPopuli
## Adding a Dataset - **Name:** VoxPopuli - **Description:** A Large-Scale Multilingual Speech Corpus - **Paper:** https://arxiv.org/pdf/2101.00390.pdf - **Data:** https://github.com/facebookresearch/voxpopuli - **Motivation:** one of the largest (if not the largest) multilingual speech corpus: 400K hours of multilingual unlabeled speech + 17k hours of labeled speech Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). 👀 @kahne @Molugan
closed
https://github.com/huggingface/datasets/issues/3728
2022-02-15T23:04:55
2022-02-16T18:49:12
2022-02-16T18:49:12
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,138,979,732
3,727
Patch all module attributes in its namespace
When patching module attributes, only those defined in its `__all__` variable were considered by default (only falling back to `__dict__` if `__all__` was None). However, those are only a subset of all the module attributes in its namespace (`__dict__` variable). This PR fixes the problem of modules that have a non-None `__all__` variable but whose attributes present in `__dict__` (and not in `__all__`) are accessed. For example, `pandas` has the attribute `__version__` only present in `__dict__`. - Before version 1.4, pandas `__all__` was None, thus all attributes in `__dict__` were patched - From version 1.4, pandas `__all__` is not None, thus attributes in `__dict__` not present in `__all__` are ignored Fix #3724. CC: @severo @lvwerra
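The pandas example can be reproduced with a toy module (a sketch of the namespace difference, not the patching code itself):

```python
import types

# A toy module mimicking pandas >= 1.4: __version__ lives in the module's
# __dict__ but is not listed in its __all__.
mod = types.ModuleType("fake_pandas")
mod.DataFrame = type("DataFrame", (), {})
mod.__version__ = "1.4.0"
mod.__all__ = ["DataFrame"]

# Patching only the names in __all__ misses __version__ ...
assert "__version__" not in set(mod.__all__)
# ... while patching the names in __dict__ (the full namespace) covers it.
assert "__version__" in vars(mod)
```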
closed
https://github.com/huggingface/datasets/pull/3727
2022-02-15T17:12:27
2022-02-17T17:06:18
2022-02-17T17:06:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,138,870,362
3,726
Use config pandas version in CSV dataset builder
Fix #3724.
closed
https://github.com/huggingface/datasets/pull/3726
2022-02-15T15:47:49
2022-02-15T16:55:45
2022-02-15T16:55:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,138,835,625
3,725
Pin pandas to avoid bug in streaming mode
Temporarily pin pandas version to avoid bug in streaming mode (patching no longer works). Related to #3724.
closed
https://github.com/huggingface/datasets/pull/3725
2022-02-15T15:21:00
2022-02-15T15:52:38
2022-02-15T15:52:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,138,827,681
3,724
Bug while streaming CSV dataset with pandas 1.4
## Describe the bug If we upgrade to pandas `1.4`, the patching of the pandas module is no longer working ``` AttributeError: '_PatchedModuleObj' object has no attribute '__version__' ``` ## Steps to reproduce the bug ``` pip install pandas==1.4 ``` ```python from datasets import load_dataset ds = load_dataset("lvwerra/red-wine", split="train", streaming=True) item = next(iter(ds)) item ```
closed
https://github.com/huggingface/datasets/issues/3724
2022-02-15T15:16:19
2022-02-15T16:55:44
2022-02-15T16:55:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,138,789,493
3,723
Fix flatten of complex feature types
Fix `flatten` for the following feature types: Image/Audio, Translation, and TranslationVariableLanguages. Inspired by `cast`/`table_cast`, I've introduced a `table_flatten` function to handle the Image/Audio types. CC: @SBrandeis Fix #3686.
closed
https://github.com/huggingface/datasets/pull/3723
2022-02-15T14:45:33
2022-03-18T17:32:26
2022-03-18T17:28:14
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,138,770,211
3,722
added electricity load diagram dataset
Initial Electricity Load Diagram time series dataset.
closed
https://github.com/huggingface/datasets/pull/3722
2022-02-15T14:29:29
2022-02-16T18:53:21
2022-02-16T18:48:07
{ "login": "kashif", "id": 8100, "type": "User" }
[]
true
[]
1,137,617,108
3,721
Multi-GPU support for `FaissIndex`
Per #3716 , current implementation does not take into consideration that `faiss` can run on multiple GPUs. In this commit, I provided multi-GPU support for `FaissIndex` by modifying the device management in `IndexableMixin.add_faiss_index` and `FaissIndex.load`. Now users are able to pass in 1. a positive integer (as usual) to use 1 GPU 2. a negative integer `-1` to use all GPUs 3. a list of integers e.g. `[0, 1]` to run only on those GPUs 4. Of course, passing in nothing still runs on CPU. This closes: #3716
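One way to sketch the device handling described above (a hypothetical helper; `num_gpus` stands in for `faiss.get_num_gpus()`):

```python
from typing import List, Optional, Union

def normalize_device(device: Optional[Union[int, List[int]]], num_gpus: int = 4) -> List[int]:
    # Map the accepted `device` values onto an explicit list of GPU ids:
    #   None        -> []         (run on CPU)
    #   -1          -> all GPUs
    #   int >= 0    -> [device]   (a single GPU, as before)
    #   list of int -> exactly those GPUs
    if device is None:
        return []
    if isinstance(device, int):
        return list(range(num_gpus)) if device < 0 else [device]
    return list(device)
```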
closed
https://github.com/huggingface/datasets/pull/3721
2022-02-14T17:26:51
2022-03-07T16:28:57
2022-03-07T16:28:56
{ "login": "rentruewang", "id": 32859905, "type": "User" }
[]
true
[]
1,137,537,080
3,720
Builder Configuration Update Required on Common Voice Dataset
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found. I checked the source file here for the languages support: https://github.com/huggingface/datasets/blob/master/datasets/common_voice/common_voice.py and Urdu isn't included there. I assume a quick update will fix the issue as Urdu speech is now available at the Common Voice dataset. Am I the one who added this dataset? No
closed
https://github.com/huggingface/datasets/issues/3720
2022-02-14T16:21:41
2024-04-28T18:03:08
2024-04-28T18:03:08
{ "login": "aasem", "id": 12482065, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,137,237,622
3,719
Check if indices values in `Dataset.select` are within bounds
Fix #3707 Instead of reusing `_check_valid_index_key` from `datasets.formatting`, I defined a new function to provide a more meaningful error message.
closed
https://github.com/huggingface/datasets/pull/3719
2022-02-14T12:31:41
2022-02-14T19:19:22
2022-02-14T19:19:22
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,137,196,388
3,718
Fix Evidence Infer Treatment dataset
This PR: - fixes a bug in the script, by removing an unnamed column with the row index: fix KeyError - fix the metadata JSON, by adding both configurations (1.1 and 2.0): fix ExpectedMoreDownloadedFiles - updates the dataset card Fix #3515.
closed
https://github.com/huggingface/datasets/pull/3718
2022-02-14T11:58:07
2022-02-14T13:21:45
2022-02-14T13:21:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,137,183,015
3,717
wrong condition in `Features ClassLabel encode_example`
## Describe the bug The `encode_example` function in *features.py* seems to have a wrong condition. ```python if not -1 <= example_data < self.num_classes: raise ValueError(f"Class label {example_data:d} greater than configured num_classes {self.num_classes}") ``` ## Expected results The `not -1 <=` part changes the result of the condition. For instance, if `example_data` equals 4 and `self.num_classes` equals 4 too, `example_data < self.num_classes` will give `False` as expected. But if I add the `not -1 <=` part, `not -1 <= example_data < self.num_classes` will give `True` and raise an exception. ## Environment info - `datasets` version: 1.18.3 - Python version: 3.8.10 - PyArrow version: 7.00
closed
https://github.com/huggingface/datasets/issues/3717
2022-02-14T11:44:35
2022-02-14T15:09:36
2022-02-14T15:07:43
{ "login": "Tudyx", "id": 56633664, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,136,831,092
3,716
`FaissIndex` to support multiple GPU and `custom_index`
**Is your feature request related to a problem? Please describe.** Currently, because `device` is of the type `int | None`, to leverage `faiss-gpu`'s multi-gpu support, you need to create a `custom_index`. However, if using a `custom_index` created by e.g. `faiss.index_cpu_to_all_gpus`, then `FaissIndex.save` does not work properly because it checks the device id (which is an int, so no multiple GPUs). **Describe the solution you'd like** I would like `FaissIndex` to support multiple GPUs, by passing in a list to `add_faiss_index`. **Describe alternatives you've considered** Alternatively, I would like it to at least provide a warning because it wasn't the behavior that I expected. **Additional context** Relevant source code here: https://github.com/huggingface/datasets/blob/6ed6ac9448311930557810383d2cfd4fe6aae269/src/datasets/search.py#L340-L349 Device management needs changing to support multiple GPUs, probably via `isinstance` checks. I can provide a PR if you like :) Thanks for reading!
closed
https://github.com/huggingface/datasets/issues/3716
2022-02-14T06:21:43
2022-03-07T16:28:56
2022-03-07T16:28:56
{ "login": "rentruewang", "id": 32859905, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,136,107,879
3,715
Fix bugs in msr_sqa dataset
The last version has several problems: 1) Errors in table loading: splitting rows on a single comma instead of using pandas is wrong. 2) Duplicated ids in the `_generate_examples` function. 3) Missing history-question information, which makes the dataset hard to use. I fixed these referring to https://github.com/HKUNLP/UnifiedSKG, and we tested that it performs normally.
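The core problem with splitting rows on a single comma can be seen with Python's `csv` module, which, like pandas, respects quoting (illustrative only):

```python
import csv
import io

row = 'a,"hello, world",c'

naive = row.split(",")                       # breaks inside the quoted field
proper = next(csv.reader(io.StringIO(row)))  # respects the quoting

assert naive == ['a', '"hello', ' world"', 'c']
assert proper == ["a", "hello, world", "c"]
```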
closed
https://github.com/huggingface/datasets/pull/3715
2022-02-13T16:37:30
2022-10-03T09:10:02
2022-10-03T09:08:06
{ "login": "Timothyxxx", "id": 47296835, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,136,105,530
3,714
tatoeba_mt: File not found error and key error
## Dataset viewer issue for 'tatoeba_mt' **Link:** https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt My data loader script does not seem to work. The files are part of the local repository but cannot be found. An example where it should work is the subset for "afr-eng". Another problem is that I do not have validation data for all subsets and I don't know how to properly check whether validation exists in the configuration before I try to download it. An example is the subset for "afr-deu". Am I the one who added this dataset ? Yes
closed
https://github.com/huggingface/datasets/issues/3714
2022-02-13T16:35:45
2022-02-13T20:44:04
2022-02-13T20:44:04
{ "login": "jorgtied", "id": 614718, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,135,692,572
3,713
Rm sphinx doc
Checklist - [x] Update circle ci yaml - [x] Delete sphinx static & python files in docs dir - [x] Update readme in docs dir - [ ] Update docs config in setup.py
closed
https://github.com/huggingface/datasets/pull/3713
2022-02-13T11:26:31
2022-02-17T10:18:46
2022-02-17T10:12:09
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,134,252,505
3,712
Fix the error of msr_sqa dataset
Fix the error in the `_load_table_data` function of the msr_sqa dataset: it is wrong to split each row on a single comma.
closed
https://github.com/huggingface/datasets/pull/3712
2022-02-12T16:27:54
2022-02-13T11:21:05
2022-02-13T11:21:05
{ "login": "Timothyxxx", "id": 47296835, "type": "User" }
[]
true
[]
1,134,050,545
3,711
Fix the error of _load_table_data function in msr_sqa dataset
The `_load_table_data` function from the last version is wrong: splitting each row on a single comma is incorrect.
closed
https://github.com/huggingface/datasets/pull/3711
2022-02-12T13:20:53
2022-02-12T13:30:43
2022-02-12T13:30:43
{ "login": "Timothyxxx", "id": 47296835, "type": "User" }
[]
true
[]
1,133,955,393
3,710
Fix CI code quality issue
Fix CI code quality issue introduced by #3695.
closed
https://github.com/huggingface/datasets/pull/3710
2022-02-12T12:05:39
2022-02-12T12:58:05
2022-02-12T12:58:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,132,997,904
3,709
Set base path to hub url for canonical datasets
This should allow canonical datasets to use relative paths to download data files from the Hub cc @polinaeterna this will be useful if we have audio datasets that are canonical and for which you'd like to host data files
closed
https://github.com/huggingface/datasets/pull/3709
2022-02-11T19:23:20
2022-02-16T14:02:28
2022-02-16T14:02:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,132,968,402
3,708
Loading JSON gets stuck with many workers/threads
## Describe the bug Loading a JSON dataset with `load_dataset` can get stuck when running on a machine with many CPUs. This is especially an issue when loading a large dataset on a large machine. ## Steps to reproduce the bug I originally created the following script to reproduce the issue: ```python from datasets import load_dataset from multiprocessing import Process from tqdm import tqdm import datasets from transformers import set_seed def run_tasks_in_parallel(tasks, ds_list): for _ in tqdm(range(1000)): print('new batch') running_tasks = [Process(target=task, args=(ds, i)) for i, (task, ds) in enumerate(zip(tasks, ds_list))] for running_task in running_tasks: running_task.start() for running_task in running_tasks: running_task.join() def get_dataset(): dataset_name = 'transformersbook/codeparrot' ds = load_dataset(dataset_name+'-train', split="train", streaming=True) ds = ds.shuffle(buffer_size=1000, seed=1) return iter(ds) def get_next_element(ds, process_id, N=10000): for _ in range(N): _ = next(ds)['content'] print(f'process {process_id} done') return set_seed(1) datasets.utils.logging.set_verbosity_debug() n_processes = 8 tasks = [get_next_element for _ in range(n_processes)] args = [get_dataset() for _ in range(n_processes)] run_tasks_in_parallel(tasks, args) ``` Today I noticed that it can happen when running it on a single process on a machine with many cores without streaming. So just `load_dataset("transformersbook/codeparrot-train")` alone might cause the issue after waiting long enough or trying many times. It's a slightly random process which makes it especially hard to track down. When I encountered it today it had already processed 17GB of data (the size of the cache folder when it got stuck) before getting stuck. Here's my current understanding of the error. 
As far as I can tell it happens in the following block: https://github.com/huggingface/datasets/blob/be701e9e89ab38022612c7263edc015bc7feaff9/src/datasets/packaged_modules/json/json.py#L119-L139 When the `try` on line 121 fails and the `block_size` is increased, it can happen that it can't read the JSON again and gets stuck indefinitely. A hint that points in that direction is that increasing the `chunksize` argument decreases the chance of getting stuck and vice versa. Maybe it is an issue with a lock on the file that is not properly released. ## Expected results Read a JSON before the end of the universe. ## Actual results Read a JSON not before the end of the universe. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.10 - PyArrow version: 7.0.0 @lhoestq we discussed this a while ago. @albertvillanova we discussed this today :)
open
https://github.com/huggingface/datasets/issues/3708
2022-02-11T18:50:48
2023-06-16T11:24:12
null
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,132,741,903
3,707
`.select`: unexpected behavior with `indices`
## Describe the bug The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"text": ["d", "e", "f"], "label": [4, 5, 6]}) res1 = ds.select([1, 2, 3])['text'] res2 = ds.select([1000])['text'] ``` ## Expected results Both results should throw an `Error`. ## Actual results `res1` will give `['e', 'f', 'd']` `res2` will give `['e']` ## Environment info Bug found from this environment: - `datasets` version: 1.16.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.7 - PyArrow version: 6.0.1 It was also replicated on `master`.
closed
https://github.com/huggingface/datasets/issues/3707
2022-02-11T15:20:01
2022-02-14T19:19:21
2022-02-14T19:19:21
{ "login": "gabegma", "id": 36087158, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,132,218,874
3,706
Unable to load dataset 'big_patent'
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\datasets\downloads\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\bigPatentData\train.tar.gz doesn't exist ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:1.18.3 - Platform: Windows - Python version:3.8 - PyArrow version:7.0.0
closed
https://github.com/huggingface/datasets/issues/3706
2022-02-11T09:48:34
2022-02-14T15:26:03
2022-02-14T15:26:03
{ "login": "ankitk2109", "id": 26432753, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,132,053,226
3,705
Raise informative error when loading a save_to_disk dataset
People recurrently report error when trying to load a dataset (using `load_dataset`) that was previously saved using `save_to_disk`. This PR raises an informative error message telling them they should use `load_from_disk` instead. Close #3700.
closed
https://github.com/huggingface/datasets/pull/3705
2022-02-11T08:21:03
2022-02-11T22:56:40
2022-02-11T22:56:39
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,132,042,631
3,704
OSCAR-2109 datasets are misaligned and truncated
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the particular (mis)alignment is in various configurations: ```python from datasets import load_dataset dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_fi", split="train", use_auth_token=True) entry = dataset[0] # entry["text"] is from fi_part_3.txt.gz # entry["meta"] is from fi_meta_part_2.jsonl.gz dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_no", split="train", use_auth_token=True) entry = dataset[900000] # entry["text"] is from no_part_3.txt.gz and contains a blank line # entry["meta"] is from no_meta_part_1.jsonl.gz dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_mk", split="train", streaming=True, use_auth_token=True) # 9088 texts in the dataset are empty ``` For `deduplicated_fi`, all exported raw texts from the dataset are 17GB rather than 20GB as reported in the data splits overview table. The token count with `wc -w` for the raw texts is 2,067,556,874 rather than the expected 2,357,264,196 from the data splits table. For `deduplicated_no` all exported raw texts contain 624,040,887 rather than the expected 776,354,517 tokens. For `deduplicated_mk` it is 122,236,936 rather than 134,544,934 tokens. I'm not expecting the `wc -w` counts to line up exactly with the data splits table, but for comparison the `wc -w` count for `deduplicated_mk` on the raw texts is 134,545,424. ## Issues * The meta / text files are not paired correctly when loading, so the extracted texts do not have the right offsets, the metadata is not associated with the correct text, and the text files may not be processed to the end or may be processed beyond the end (empty texts). 
* The line count offset is not reset per file so the texts aren't aligned to the right offsets in any parts beyond the first part, leading to truncation when in effect blank lines are not skipped. * Non-unix newline characters are treated as newlines when reading the text files while the metadata only counts unix newlines for its line offsets, leading to further misalignments between the metadata and the extracted texts, and which also results in truncation. ## Expected results All texts from the OSCAR release are extracted according to the metadata and aligned with the correct metadata. ## Fixes Not necessarily the exact fixes/checks you may want to use (I didn't test all languages or do any cross-platform testing, I'm not sure all the details are compatible with streaming), however to highlight the issues: ```diff diff --git a/OSCAR-2109.py b/OSCAR-2109.py index bbac1076..5eee8de7 100644 --- a/OSCAR-2109.py +++ b/OSCAR-2109.py @@ -20,6 +20,7 @@ import collections import gzip import json +import os import datasets @@ -387,9 +388,20 @@ class Oscar2109(datasets.GeneratorBasedBuilder): with open(checksum_file, encoding="utf-8") as f: data_filenames = [line.split()[1] for line in f if line] data_urls = [self.config.base_data_path + data_filename for data_filename in data_filenames] - text_files = dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")]) - metadata_files = dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")]) + # sort filenames so corresponding parts are aligned + text_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")])) + metadata_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")])) + assert len(text_files) == len(metadata_files) metadata_and_text_files = list(zip(metadata_files, text_files)) + for meta_path, text_path in metadata_and_text_files: + # check that meta/text part numbers are the same + if "part" in 
os.path.basename(text_path): + assert ( + os.path.basename(text_path).replace(".txt.gz", "").split("_")[-1] + == os.path.basename(meta_path).replace(".jsonl.gz", "").split("_")[-1] + ) + else: + assert len(metadata_and_text_files) == 1 return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"metadata_and_text_files": metadata_and_text_files}), ] @@ -397,10 +409,14 @@ class Oscar2109(datasets.GeneratorBasedBuilder): def _generate_examples(self, metadata_and_text_files): """This function returns the examples in the raw (text) form by iterating on all the files.""" id_ = 0 - offset = 0 for meta_path, text_path in metadata_and_text_files: + # line offsets are per text file + offset = 0 logger.info("generating examples from = %s", text_path) - with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8") as text_f: + # some texts contain non-Unix newlines that should not be + # interpreted as line breaks for the line counts in the metadata + # with readline() + with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8", newline="\n") as text_f: with gzip.open(open(meta_path, "rb"), "rt", encoding="utf-8") as meta_f: for line in meta_f: # read meta @@ -411,7 +427,12 @@ class Oscar2109(datasets.GeneratorBasedBuilder): offset += 1 text_f.readline() # read text - text = "".join([text_f.readline() for _ in range(meta["nb_sentences"])]).rstrip() + text_lines = [text_f.readline() for _ in range(meta["nb_sentences"])] + # all lines contain text (no blank lines or EOF) + assert all(text_lines) + assert "\n" not in text_lines offset += meta["nb_sentences"] + # only strip the trailing newline + text = "".join(text_lines).rstrip("\n") yield id_, {"id": id_, "text": text, "meta": meta} id_ += 1 ``` I've tested this with a number of smaller deduplicated languages with 1-20 parts and the resulting datasets looked correct in terms of word count and size when compared to the data splits table and raw texts, and the text/metadata alignments were correct in all my spot 
checks. However, there are many many languages I didn't test and I'm not sure that there aren't any texts containing blank lines in the corpus, for instance. For the cases I tested, the assertions related to blank lines and EOF made it easier to verify that the text and metadata were aligned as intended, since there would be little chance of spurious alignments of variable-length texts across so much data.
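The core of the proposed fix — pairing text and metadata shards by part number rather than by download order — can be sketched standalone (a hypothetical helper, simplified from the diff above):

```python
import os
import re

def pair_shards(text_files, meta_files):
    """Zip *_part_N.txt.gz with *_meta_part_N.jsonl.gz by part number,
    so shards downloaded out of order are still aligned correctly."""
    def part_num(path):
        m = re.search(r"part_(\d+)\.", os.path.basename(path))
        return int(m.group(1)) if m else 0

    texts = sorted(text_files, key=part_num)
    metas = sorted(meta_files, key=part_num)
    assert len(texts) == len(metas)
    for meta, text in zip(metas, texts):
        # check that meta/text part numbers actually match
        assert part_num(meta) == part_num(text), (meta, text)
    return list(zip(metas, texts))
```

Without this per-part pairing, `fi_part_3.txt.gz` can end up zipped with `fi_meta_part_2.jsonl.gz`, which is exactly the misalignment observed.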
closed
https://github.com/huggingface/datasets/issues/3704
2022-02-11T08:14:59
2022-03-17T18:01:04
2022-03-16T16:21:28
{ "login": "adrianeboyd", "id": 5794899, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,131,882,772
3,703
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py to load locally. Loading code: metric = load_ metric(path='mymetric/seqeval/seqeval.py') But tips: Traceback (most recent call last): File "/home/ubuntu/Python3.6_project/zyf_project/transformers/examples/pytorch/token-classification/run_ner.py", line 604, in <module> main() File "/home/ubuntu/Python3.6_project/zyf_project/transformers/examples/pytorch/token-classification/run_ner.py", line 481, in main metric = load_metric(path='mymetric/seqeval/seqeval.py') File "/home/ubuntu/Python3.6_project/zyf_project/transformers_venv_0209/lib/python3.7/site-packages/datasets/load.py", line 610, in load_metric dataset=False, File "/home/ubuntu/Python3.6_project/zyf_project/transformers_venv_0209/lib/python3.7/site-packages/datasets/load.py", line 450, in prepare_module f"To be able to use this {module_type}, you need to install the following dependencies" ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance' **What should I do? Please help me, thank you**
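The error above means the `seqeval` package itself is not installed in the environment — loading the metric script locally does not remove its Python dependency. A quick stdlib check (illustrative, not part of `datasets`) shows the distinction:

```python
import importlib.util

def missing_dependencies(names):
    """Return the subset of `names` that cannot be imported locally."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# If this reports ['seqeval'], run `pip install seqeval` first;
# load_metric('mymetric/seqeval/seqeval.py') only fetches the metric
# script, not the package it imports.
print(missing_dependencies(["seqeval"]))
```

In short: downloading `seqeval.py` avoids the network fetch of the script, but `pip install seqeval` is still required.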
closed
https://github.com/huggingface/datasets/issues/3703
2022-02-11T06:38:42
2023-07-11T09:31:59
2023-07-11T09:31:59
{ "login": "zhangyifei1", "id": 28425091, "type": "User" }
[]
false
[]
1,130,666,707
3,702
Update data URL of lm1b dataset
The http address doesn't work anymore
closed
https://github.com/huggingface/datasets/pull/3702
2022-02-10T18:46:30
2022-09-23T11:52:39
2022-09-23T11:52:39
{ "login": "yazdanbakhsh", "id": 7105134, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,130,498,738
3,701
Pin ElasticSearch
Until we manage to support ES 8.0, I'm setting the version to `<8.0.0`. Currently we're getting this error on 8.0: ```python ValueError: Either 'hosts' or 'cloud_id' must be specified ``` When instantiating an `Elasticsearch()` object
closed
https://github.com/huggingface/datasets/pull/3701
2022-02-10T17:15:26
2022-02-10T17:31:13
2022-02-10T17:31:12
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,130,200,593
3,699
Add dev-only config to Natural Questions dataset
As suggested by @lhoestq and @thomwolf, a new config has been added to the Natural Questions dataset, so that only the dev split can be downloaded. Fix #413.
closed
https://github.com/huggingface/datasets/pull/3699
2022-02-10T14:42:24
2022-02-11T09:50:22
2022-02-11T09:50:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,129,864,282
3,698
Add finetune-data CodeFill
null
closed
https://github.com/huggingface/datasets/pull/3698
2022-02-10T11:12:51
2022-10-03T09:36:18
2022-10-03T09:36:18
{ "login": "rgismondi", "id": 49989029, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,129,795,724
3,697
Add code-fill datasets for pretraining/finetuning/evaluating
null
closed
https://github.com/huggingface/datasets/pull/3697
2022-02-10T10:31:48
2022-07-06T15:19:58
2022-07-06T15:19:58
{ "login": "rgismondi", "id": 49989029, "type": "User" }
[]
true
[]
1,129,764,534
3,696
Force unique keys in newsqa dataset
Currently, it may raise `DuplicatedKeysError`. Fix #3630.
closed
https://github.com/huggingface/datasets/pull/3696
2022-02-10T10:09:19
2022-02-14T08:37:20
2022-02-14T08:37:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,129,730,148
3,695
Fix ClassLabel to/from dict when passed names_file
Currently, `names_file` is a field of the data class `ClassLabel`, thus appearing when transforming it to dict (when saving infos). Afterwards, when trying to read it from infos, it conflicts with the other field `names`. This PR removes `names_file` as a field of the data class `ClassLabel`: it is only used at instantiation to generate the `labels` field. Fix #3631.
closed
https://github.com/huggingface/datasets/pull/3695
2022-02-10T09:47:10
2022-02-11T23:02:32
2022-02-11T23:02:31
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,128,554,365
3,693
Standardize to `Example::`
null
closed
https://github.com/huggingface/datasets/pull/3693
2022-02-09T13:37:13
2022-02-17T10:20:55
2022-02-17T10:20:52
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,128,320,004
3,692
Update data URL in pubmed dataset
Fix #3655.
closed
https://github.com/huggingface/datasets/pull/3692
2022-02-09T10:06:21
2022-02-14T14:15:42
2022-02-14T14:15:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,127,629,306
3,691
Upgrade black to version ~=22.0
Upgrades the `datasets` library quality tool `black` to use the first stable release of `black`, version 22.0.
closed
https://github.com/huggingface/datasets/pull/3691
2022-02-08T18:45:19
2022-02-08T19:56:40
2022-02-08T19:56:39
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[]
true
[]
1,127,493,538
3,690
Update docs to new frontend/UI
### TLDR: Update `datasets` `docs` to the new syntax (markdown and mdx files) & frontend (as how it looks on [hf.co/transformers](https://huggingface.co/docs/transformers/index)) | Light mode | Dark mode | |-----------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------| | <img width="400" alt="Screenshot 2022-02-17 at 14 15 34" src="https://user-images.githubusercontent.com/11827707/154489358-e2fb3708-8d72-4fb6-93f0-51d4880321c0.png"> | <img width="400" alt="Screenshot 2022-02-17 at 14 16 27" src="https://user-images.githubusercontent.com/11827707/154489596-c5a1311b-181c-4341-adb3-d60a7d3abe85.png"> | ## Checklist - [x] update datasets docs to new syntax (should call `doc-builder convert`) (this PR) - [x] discuss `@property` methods frontend https://github.com/huggingface/doc-builder/pull/87 - [x] discuss `inject_arrow_table_documentation` (this PR) https://github.com/huggingface/datasets/pull/3690#discussion_r801847860 - [x] update datasets docs path on moon-landing https://github.com/huggingface/moon-landing/pull/2089 - [x] convert pyarrow docstring from Numpydoc style to groups style https://github.com/huggingface/doc-builder/pull/89 (https://stackoverflow.com/a/24385103/6558628) - [x] handle `Raises` section on frontend and doc-builder https://github.com/huggingface/doc-builder/pull/86 - [x] check imgs path (this PR) (nothing to update here) - [x] doc examples block has to follow format `Examples::` https://github.com/huggingface/datasets/pull/3693 - [x] fix [this docstring](https://github.com/huggingface/datasets/blob/6ed6ac9448311930557810383d2cfd4fe6aae269/src/datasets/arrow_dataset.py#L3339) (causing svelte compilation error) - [x] Delete sphinx related files - [x] Delete sphinx CI - [x] Update docs config in setup.py - [x] add `versions.yml` in doc-build https://github.com/huggingface/doc-build/pull/1 - [x] add `versions.yml` in
doc-build-dev https://github.com/huggingface/doc-build-dev/pull/1 - [x] https://github.com/huggingface/moon-landing/pull/2089 - [x] format docstrings for example `datasets.DatasetBuilder.download_and_prepare` args format look wrong - [x] create new github actions. (can probably be in a separate PR) (see the transformers equivalents below) 1. [build_dev_documentation.yml](https://github.com/huggingface/transformers/blob/master/.github/workflows/build_dev_documentation.yml) 2. [build_documentation.yml](https://github.com/huggingface/transformers/blob/master/.github/workflows/build_documentation.yml) 3. [delete_dev_documentation.yml](https://github.com/huggingface/transformers/blob/master/.github/workflows/delete_dev_documentation.yml) ## Note to reviewers The number of changed files is a lot (100+) because I've converted all `.rst` files to `.mdx` files & they are compiling fine on the svelte side (also, moved all the imgs to to [doc-imgs repo](https://huggingface.co/datasets/huggingface/documentation-images/tree/main/datasets)). Moreover, you should just review them on preprod and see if the rendering look fine. _Therefore, I'd suggest to focus on the changed_ **`.py`** and **CI files** (github workflows, etc. you can use [this filter here](https://github.com/huggingface/datasets/pull/3690/files?file-filters%5B%5D=.py&file-filters%5B%5D=.yml&show-deleted-files=true&show-viewed-files=true)) during the review & ignore `.mdx` files. (if there's a bug in `.mdx` files, we can always handle it in a separate PR afterwards).
closed
https://github.com/huggingface/datasets/pull/3690
2022-02-08T16:38:09
2022-03-03T20:04:21
2022-03-03T20:04:20
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,127,422,478
3,689
Fix streaming for servers not supporting HTTP range requests
Some servers do not support HTTP range requests, whereas this is required to stream some file formats (like ZIP). ~~This PR implements a workaround for those cases, by downloading the files locally in a temporary directory (cleaned up by the OS once the process is finished).~~ This PR raises a custom error explaining that streaming is not possible because the data host server does not support HTTP range requests. Fix #3677.
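For context, whether a host supports range requests is advertised in its response headers; a hypothetical standalone check (not this PR's implementation) looks like:

```python
def supports_range_requests(headers):
    """True if the server advertises byte-range support.

    `headers` is a dict of response headers, e.g. from a HEAD request.
    A server that cannot serve ranges either omits Accept-Ranges or
    sets it to "none" — streaming ZIP archives from it then fails.
    """
    accept = headers.get("Accept-Ranges", "none").lower()
    return accept != "none"
```

A check like this is what lets a loader decide up front whether to stream or to raise the informative error this PR adds.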
closed
https://github.com/huggingface/datasets/pull/3689
2022-02-08T15:41:05
2022-02-10T16:51:25
2022-02-10T16:51:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,127,218,321
3,688
Pyarrow version error
## Describe the bug I installed datasets (versions 1.17.0, 1.18.0, 1.18.3) but I'm right now not able to import it because of pyarrow. When I try to import it, I get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`. I tried with all versions of pyarrow except `4.0.0` but still get the same error. ## Steps to reproduce the bug ```python import datasets ``` ## Expected results A clear and concise description of the expected results. ## Actual results AttributeError Traceback (most recent call last) <ipython-input-19-652e886d387f> in <module> ----> 1 import datasets ~\AppData\Local\Continuum\anaconda3\lib\site-packages\datasets\__init__.py in <module> 26 27 ---> 28 if _version.parse(pyarrow.__version__).major < 3: 29 raise ImportWarning( 30 "To use `datasets`, the module `pyarrow>=3.0.0` is required, and the current version of `pyarrow` doesn't match this condition.\n" AttributeError: 'Version' object has no attribute 'major' ## Environment info Traceback (most recent call last): File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Alex\AppData\Local\Continuum\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 5, in <module> File "c:\users\alex\appdata\local\continuum\anaconda3\lib\site-packages\datasets\__init__.py", line 28, in <module> if _version.parse(pyarrow.__version__).major < 3: AttributeError: 'Version' object has no attribute 'major' - `datasets` version: - Platform: Linux (Ubuntu) and Windows: conda on both - Python version: 3.7 - PyArrow version: 7.0.0
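The `AttributeError: 'Version' object has no attribute 'major'` comes from the version-parsing step, not from pyarrow itself. A defensive parse of the major version (an illustrative workaround, not the library's actual fix) avoids relying on attributes of the parsed object:

```python
def major_version(version_string):
    """Extract the leading major-version number from a version string.

    Tolerates suffixes like "7.0.0.dev123" or "3.0.0+cuda" by keeping
    only the digits of the first dotted component.
    """
    first = version_string.split(".")[0]
    digits = "".join(ch for ch in first if ch.isdigit())
    return int(digits) if digits else 0
```

With this, a check like `major_version(pyarrow.__version__) < 3` does not depend on which `Version` class the environment's parser returns.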
closed
https://github.com/huggingface/datasets/issues/3688
2022-02-08T12:53:59
2022-02-09T06:35:33
2022-02-09T06:35:32
{ "login": "Zaker237", "id": 49993443, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,127,154,766
3,687
Can't get the text data when calling to_tf_dataset
I am working with the SST2 dataset, and am using TensorFlow 2.5. I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transformers import DefaultDataCollator data_collator = DefaultDataCollator(return_tensors="tf") dataset = load_dataset("sst") train_dataset = dataset["train"].to_tf_dataset(columns=['sentence'], label_cols="label", shuffle=True, batch_size=8,collate_fn=data_collator) ``` However, this only gets me the labels; the text--the most important part--is missing: ``` for s in train_dataset.take(1): print(s) #prints something like: ({}, <tf.Tensor: shape=(8,), ...>) ``` As you can see, it only returns the label part, not the data, as indicated by the empty dictionary, `{}`. So far, I've played with various settings of the method arguments, but to no avail; I do not want to perform any text processing at this time. On my quest to achieve what I want (a `tf.data.Dataset`), I've consulted these resources: [https://www.philschmid.de/huggingface-transformers-keras-tf](https://www.philschmid.de/huggingface-transformers-keras-tf) [https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow](https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow) I was surprised to not find more extensive examples on how to transform a Hugging Face dataset to one compatible with TensorFlow. If you could point me to where I am going wrong, please do so. Thanks in advance for your support. --- Edit: In the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset), I found the following description: _In general, only columns that the model can use as input should be included here (numeric data only)._ Does this imply that no textual, i.e., `string` data can be loaded?
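As the quoted doc note suggests, `to_tf_dataset` only passes through numeric columns, so a raw string column is dropped; the usual remedy is to tokenize first so the text becomes integer IDs. A toy sketch of that conversion (plain Python with a made-up vocabulary, standing in for a real tokenizer) illustrates why:

```python
def toy_tokenize(sentences, vocab):
    """Map each sentence to a list of integer IDs, like a tokenizer would.

    After this step the column is numeric and survives to_tf_dataset;
    before it, the string column is silently excluded.
    """
    unk = vocab.get("[UNK]", 0)
    return [[vocab.get(tok, unk) for tok in s.lower().split()] for s in sentences]
```

In practice one would run a real tokenizer over the dataset with `.map(...)` and pass the resulting `input_ids` column to `to_tf_dataset` instead of `sentence`.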
closed
https://github.com/huggingface/datasets/issues/3687
2022-02-08T11:52:10
2023-01-19T14:55:18
2023-01-19T14:55:18
{ "login": "phrasenmaeher", "id": 82086367, "type": "User" }
[]
false
[]
1,127,137,290
3,686
`Translation` features cannot be `flatten`ed
## Describe the bug [`Dataset.flatten`](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265) fails for columns with feature [`Translation`](https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8) ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("europa_ecdc_tm", "en2fr", split="train[:10]") print(dataset.features) # {'translation': Translation(languages=['en', 'fr'], id=None)} print(dataset[0]) # {'translation': {'en': 'Vaccination against hepatitis C is not yet available.', 'fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.'}} dataset.flatten() ``` ## Expected results `dataset.flatten` should flatten the `Translation` column as if it were a dict of `Value("string")` ```python dataset[0] # {'translation.en': 'Vaccination against hepatitis C is not yet available.', 'translation.fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.'
} dataset.features # {'translation.en': Value("string"), 'translation.fr': Value("string")} ``` ## Actual results ```python In [31]: dset.flatten() --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-31-bb88eb5276ee> in <module> ----> 1 dset.flatten() [...]\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs) 411 # Call actual function 412 --> 413 out = func(self, *args, **kwargs) 414 415 # Update fingerprint of in-place transforms + update in-place history of transforms [...]\site-packages\datasets\arrow_dataset.py in flatten(self, new_fingerprint, max_depth) 1294 break 1295 dataset.info.features = self.features.flatten(max_depth=max_depth) -> 1296 dataset._data = update_metadata_with_features(dataset._data, dataset.features) 1297 logger.info(f'Flattened dataset from depth {depth} to depth {1 if depth + 1 < max_depth else "unknown"}.') 1298 dataset._fingerprint = new_fingerprint [...]\site-packages\datasets\arrow_dataset.py in update_metadata_with_features(table, features) 534 def update_metadata_with_features(table: Table, features: Features): 535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema.""" --> 536 features = Features({col_name: features[col_name] for col_name in table.column_names}) 537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata: 538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features)) [...]\site-packages\datasets\arrow_dataset.py in <dictcomp>(.0) 534 def update_metadata_with_features(table: Table, features: Features): 535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema.""" --> 536 features = Features({col_name: features[col_name] for col_name in table.column_names}) 537 if table.schema.metadata is None or 
b"huggingface" not in table.schema.metadata: 538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features)) KeyError: 'translation.en' ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.10 - PyArrow version: 3.0.0
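The expected behavior amounts to dict flattening with dotted keys; in plain Python (a sketch of the intended semantics, not the library internals) it is:

```python
def flatten_row(row, sep="."):
    """Flatten one level of nested dicts: {'translation': {'en': ...}}
    becomes {'translation.en': ...}, which is what Dataset.flatten
    is expected to produce for Translation columns."""
    out = {}
    for key, value in row.items():
        if isinstance(value, dict):
            for sub, v in value.items():
                out[f"{key}{sep}{sub}"] = v
        else:
            out[key] = value
    return out
```

The reported `KeyError: 'translation.en'` arises because the Arrow columns get the dotted names while the features mapping does not, so the lookup on the new names fails.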
closed
https://github.com/huggingface/datasets/issues/3686
2022-02-08T11:33:48
2022-03-18T17:28:13
2022-03-18T17:28:13
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,126,240,444
3,685
Add support for `Audio` and `Image` feature in `push_to_hub`
Add support for the `Audio` and `Image` features in `push_to_hub`. The idea is to remove local path information and store file content under "bytes" in the Arrow table before the push. My initial approach (https://github.com/huggingface/datasets/commit/34c652afeff9686b6b8bf4e703c84d2205d670aa) was to use a map transform similar to [`decode_nested_example`](https://github.com/huggingface/datasets/blob/5e0f6068741464f833ff1802e24ecc2064aaea9f/src/datasets/features/features.py#L1023-L1056) while having decoding turned off, but I wasn't satisfied with the code quality, so I ended up using the `temporary_assignment` decorator to override `cast_storage`, which allows me to directly modify the underlying storage (the final op is similar to `Dataset.cast`) and results in much simpler code. Additionally, I added the `allow_cast` flag that can disable this behavior in the situations where it's not needed (e.g. the dataset is already in the correct format for the Hub, etc.) EDIT: `allow_cast` renamed to `embed_external_files`
closed
https://github.com/huggingface/datasets/pull/3685
2022-02-07T16:47:16
2022-02-14T18:14:57
2022-02-14T18:04:58
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,125,133,664
3,684
[fix]: iwslt2017 download urls
Fixes #2076.
closed
https://github.com/huggingface/datasets/pull/3684
2022-02-06T07:56:55
2022-09-22T16:20:19
2022-09-22T16:20:18
{ "login": "msarmi9", "id": 48395294, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,124,458,371
3,683
added told-br (brazilian hate speech) dataset
Hey, Adding ToLD-Br. Feel free to ask for modifications. Thanks!!
closed
https://github.com/huggingface/datasets/pull/3683
2022-02-04T17:44:32
2022-02-07T21:14:52
2022-02-07T21:14:52
{ "login": "joaoaleite", "id": 26556320, "type": "User" }
[]
true
[]
1,124,434,330
3,682
adding told-br for toxic/abusive hatespeech detection
Hey, I'm adding our dataset from our paper published at AACL 2020. Feel free to ask for modifications. Thanks!
closed
https://github.com/huggingface/datasets/pull/3682
2022-02-04T17:18:29
2022-02-07T03:23:24
2022-02-04T17:36:40
{ "login": "joaoaleite", "id": 26556320, "type": "User" }
[]
true
[]
1,124,237,458
3,681
Fix TestCommand to move dataset_infos instead of copying
Why do we copy instead of moving the file? CC: @lhoestq @lvwerra
closed
https://github.com/huggingface/datasets/pull/3681
2022-02-04T14:01:52
2023-09-24T10:00:11
2023-09-24T09:59:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,124,213,416
3,680
Fix TestCommand to copy dataset_infos to local dir with only data files
Currently this case is missed. CC: @lvwerra
closed
https://github.com/huggingface/datasets/pull/3680
2022-02-04T13:36:46
2022-02-08T10:32:55
2022-02-08T10:32:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,124,062,133
3,679
Download datasets from a private hub
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. The same issue exists with the transformers library and the CLI. I'm going to create issues there as well, and I'll reference them below.
closed
https://github.com/huggingface/datasets/issues/3679
2022-02-04T10:49:06
2022-02-22T11:08:07
2022-02-22T11:08:07
{ "login": "juliensimon", "id": 3436143, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "private-hub", "color": "A929D8" } ]
false
[]