Dataset columns (value ranges as shown by the dataset viewer):
- id: int64 (599M to 3.26B)
- number: int64 (1 to 7.7k)
- title: string (length 1 to 290)
- body: string (length 0 to 228k, nullable)
- state: string (2 classes)
- html_url: string (length 46 to 51)
- created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
- updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
- closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42, nullable)
- user: dict
- labels: list (length 0 to 4)
- is_pull_request: bool (2 classes)
- comments: list (length 0 to 0)
962,994,198
2,766
fix typo (ShuffingConfig -> ShufflingConfig)
Pretty straightforward: it should be `Shuffling` instead of `Shuffing`.
closed
https://github.com/huggingface/datasets/pull/2766
2021-08-06T19:31:40
2021-08-10T14:17:03
2021-08-10T14:17:02
{ "login": "daleevans", "id": 4944007, "type": "User" }
[]
true
[]
962,861,395
2,765
BERTScore Error
## Describe the bug Computing BERTScore with `load_metric('bertscore')` raises a `TypeError` about a missing `use_fast_tokenizer` argument. ## Steps to reproduce the bug ```python from datasets import load_metric predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=references, lang='en') ``` ## Bug `TypeError: get_hash() missing 1 required positional argument: 'use_fast_tokenizer'` ## Environment info - `datasets` version: - Platform: Colab - Python version: - PyArrow version:
closed
https://github.com/huggingface/datasets/issues/2765
2021-08-06T15:58:57
2021-08-09T11:16:25
2021-08-09T11:16:25
{ "login": "gagan3012", "id": 49101362, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
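The `get_hash()` failure above is the classic symptom of a metric script written against one version of its backend (here `bert_score`) being run with another that changed a function signature. A generic, hedged sketch of the usual fix pattern is to gate keyword arguments on the installed version; the `(0, 3)` cutoff below is a placeholder assumption for illustration, not `bert_score`'s actual release history:

```python
# Version-gated keyword arguments: only pass `use_fast_tokenizer` when the
# installed backend is assumed new enough to require it. The (0, 3) cutoff
# is a placeholder for illustration, not bert_score's real changelog.
def get_hash_kwargs(bert_score_version: str) -> dict:
    major, minor, *_ = (int(part) for part in bert_score_version.split("."))
    return {"use_fast_tokenizer": False} if (major, minor) >= (0, 3) else {}
```

A metric script could then call `get_hash(..., **get_hash_kwargs(version))` and stay compatible with both signatures.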
962,554,799
2,764
Add DER metric for SUPERB speaker diarization task
null
closed
https://github.com/huggingface/datasets/pull/2764
2021-08-06T09:12:36
2023-07-11T09:35:23
2023-07-11T09:35:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "transfer-to-evaluate", "color": "E3165C" } ]
true
[]
961,895,523
2,763
English wikipedia datasets is not clean
## Describe the bug Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset w = load_dataset('wikipedia', '20200501.en') print(w['train'][0]['text']) ``` > 'Yangliuqing () is a market town in Xiqing District, in the western suburbs of Tianjin, People\'s Republic of China. Despite its relatively small size, it has been named since 2006 in the "famous historical and cultural market towns in China".\n\nIt is best known in China for creating nianhua or Yangliuqing nianhua. For more than 400 years, Yangliuqing has in effect specialised in the creation of these woodcuts for the New Year. wood block prints using vivid colourschemes to portray traditional scenes of children\'s games often interwoven with auspiciouse objects.\n\n, it had 27 residential communities () and 25 villages under its administration.\n\nShi Family Grand Courtyard\n\nShi Family Grand Courtyard (Tiānjīn Shí Jiā Dà Yuàn, 天津石家大院) is situated in Yangliuqing Town of Xiqing District, which is the former residence of wealthy merchant Shi Yuanshi - the 4th son of Shi Wancheng, one of the eight great masters in Tianjin. First built in 1875, it covers over 6,000 square meters, including large and small yards and over 200 folk houses, a theater and over 275 rooms that served as apartments and places of business and worship for this powerful family. Shifu Garden, which finished its expansion in October 2003, covers 1,200 square meters, incorporates the elegance of imperial garden and delicacy of south garden. Now the courtyard of Shi family covers about 10,000 square meters, which is called the first mansion in North China. 
Now it serves as the folk custom museum in Yangliuqing, which has a large collection of folk custom museum in Yanliuqing, which has a large collection of folk art pieces like Yanliuqing New Year pictures, brick sculpture.\n\nShi\'s ancestor came from Dong\'e County in Shandong Province, engaged in water transport of grain. As the wealth gradually accumulated, the Shi Family moved to Yangliuqing and bought large tracts of land and set up their residence. Shi Yuanshi came from the fourth generation of the family, who was a successful businessman and a good household manager, and the residence was thus enlarged for several times until it acquired the present scale. It is believed to be the first mansion in the west of Tianjin.\n\nThe residence is symmetric based on the axis formed by a passageway in the middle, on which there are four archways. On the east side of the courtyard, there are traditional single-story houses with rows of rooms around the four sides, which was once the living area for the Shi Family. The rooms on north side were the accountants\' office. On the west are the major constructions including the family hall for worshipping Buddha, theater and the south reception room. On both sides of the residence are side yard rooms for maids and servants.\n\nToday, the Shi mansion, located in the township of Yangliuqing to the west of central Tianjin, stands as a surprisingly well-preserved monument to China\'s pre-revolution mercantile spirit. It also serves as an on-location shoot for many of China\'s popular historical dramas. 
Many of the rooms feature period furniture, paintings and calligraphy, and the extensive Shifu Garden.\n\nPart of the complex has been turned into the Yangliuqing Museum, which includes displays focused on symbolic aspects of the courtyards\' construction, local folk art and customs, and traditional period furnishings and crafts.\n\n**See also \n\nList of township-level divisions of Tianjin\n\nReferences \n\n http://arts.cultural-china.com/en/65Arts4795.html\n\nCategory:Towns in Tianjin'** ## Expected results I expect no junk in the data. ## Actual results Specify the actual results or traceback. ## Environment info - `datasets` version: 1.10.2 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2763
2021-08-05T14:37:24
2023-07-25T17:43:04
2023-07-25T17:43:04
{ "login": "lucadiliello", "id": 23355969, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
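Until the dump itself is cleaned, one hedged client-side workaround is to truncate each article at its trailing sections before training. A minimal sketch follows; the marker list is an assumption about typical English Wikipedia layout, not something the `wikipedia` loader provides:

```python
import re

# Truncate an article at the first trailing-section marker so reference
# lists and category links are not fed to training. The marker names are
# an assumption; extend the alternation as needed.
TRAILING_SECTIONS = re.compile(r"\n(See also|References|External links|Category:)")

def strip_wiki_junk(text: str) -> str:
    match = TRAILING_SECTIONS.search(text)
    return text[: match.start()].rstrip() if match else text
```

This could then be applied with something like `w['train'].map(lambda ex: {'text': strip_wiki_junk(ex['text'])})`.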
961,652,046
2,762
Add RVL-CDIP dataset
## Adding a Dataset - **Name:** RVL-CDIP - **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels. - **Paper:** https://www.cs.cmu.edu/~aharley/icdar15/ - **Data:** https://www.cs.cmu.edu/~aharley/rvl-cdip/ - **Motivation:** I'm currently adding LayoutLMv2 and LayoutXLM to HuggingFace Transformers. LayoutLM (v1) already exists in the library. This dataset has a large value for document image classification (i.e. classifying scanned documents). LayoutLM models obtain SOTA on this dataset, so would be great to directly use it in notebooks. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/2762
2021-08-05T09:57:05
2022-04-21T17:15:41
2022-04-21T17:15:41
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
961,568,287
2,761
Error loading C4 realnewslike dataset
## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ``` ## Expected results success on data loading ## Actual results Downloading: 100%|██████████| 15.3M/15.3M [00:00<00:00, 28.1MB/s] Traceback (most recent call last): File "run_mlm_tf.py", line 794, in <module> main() File "run_mlm_tf.py", line 425, in main raw_datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py", line 843, in load_dataset builder_instance.download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 608, in download_and_prepare self._download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 698, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=38165657946, num_examples=13799838, dataset_name='c4'), 'recorded': 
SplitInfo(name='validation', num_bytes=37875873, num_examples=13863, dataset_name='c4')}] ## Environment info - `datasets` version: 1.10.2 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2761
2021-08-05T08:16:58
2021-08-08T19:44:34
2021-08-08T19:44:34
{ "login": "danshirron", "id": 32061512, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
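The check that fails here compares split sizes recorded in the dataset's metadata with the sizes actually generated. A simplified pure-Python sketch of that comparison (illustrative names, not the real `datasets.utils.info_utils` code):

```python
# Compare expected (num_bytes, num_examples) per split against what was
# actually generated; any mismatch is what triggers
# NonMatchingSplitsSizesError in the traceback above.
def mismatched_splits(expected: dict, recorded: dict) -> list:
    return [
        name
        for name, sizes in expected.items()
        if name in recorded and recorded[name] != sizes
    ]
```

In `datasets` 1.x, a common way to get past a stale-metadata mismatch like this one was to pass `ignore_verifications=True` to `load_dataset` (use with care: it skips exactly this check).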
961,372,667
2,760
Add Nuswide dataset
## Adding a Dataset - **Name:** *NUSWIDE* - **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)* - **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-civr2009.pdf)* - **Data:** *[here](https://github.com/wenting-zhao/nuswide)* - **Motivation:** *This dataset is a benchmark in the Text Retrieval task.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2760
2021-08-05T03:00:41
2021-12-08T12:06:23
null
{ "login": "shivangibithel", "id": 19774925, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
960,206,575
2,758
Raise ManualDownloadError when loading a dataset that requires previous manual download
This PR implements the raising of a `ManualDownloadError` when loading a dataset that requires previous manual download, and this is missing. The `ManualDownloadError` is raised whether the dataset is loaded in normal or streaming mode. Close #2749. cc: @severo
closed
https://github.com/huggingface/datasets/pull/2758
2021-08-04T10:19:55
2021-08-04T11:36:30
2021-08-04T11:36:30
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
959,984,081
2,757
Unexpected type after `concatenate_datasets`
## Describe the bug I am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`. It then leads to a weird tensors when trying to convert it to a `DataLoader`. However, if I use each `Dataset` separately everything behave as expected. ## Steps to reproduce the bug ```python >>> featurized_teacher Dataset({ features: ['t_labels', 't_input_ids', 't_token_type_ids', 't_attention_mask'], num_rows: 502 }) >>> for f in featurized_teacher.features: print(featurized_teacher[f].shape) torch.Size([502]) torch.Size([502, 300]) torch.Size([502, 300]) torch.Size([502, 300]) >>> featurized_student Dataset({ features: ['s_features', 's_labels'], num_rows: 502 }) >>> for f in featurized_student.features: print(featurized_student[f].shape) torch.Size([502, 64]) torch.Size([502]) ``` The shapes seem alright to me. Then the results after concatenation are as follow: ```python >>> concat_dataset = datasets.concatenate_datasets([featurized_student, featurized_teacher], axis=1) >>> type(concat_dataset["t_labels"]) <class 'list'> ``` One would expect to obtain the same type as the one before concatenation. Am I doing something wrong here? Any idea on how to fix this unexpected behavior? ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.9.5 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2757
2021-08-04T07:10:39
2021-08-04T16:01:24
2021-08-04T16:01:23
{ "login": "JulesBelveze", "id": 32683010, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
959,255,646
2,756
Fix metadata JSON for ubuntu_dialogs_corpus dataset
Related to #2743.
closed
https://github.com/huggingface/datasets/pull/2756
2021-08-03T15:48:59
2021-08-04T09:43:25
2021-08-04T09:43:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
959,115,888
2,755
Fix metadata JSON for turkish_movie_sentiment dataset
Related to #2743.
closed
https://github.com/huggingface/datasets/pull/2755
2021-08-03T13:25:44
2021-08-04T09:06:54
2021-08-04T09:06:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
959,105,577
2,754
Generate metadata JSON for telugu_books dataset
Related to #2743.
closed
https://github.com/huggingface/datasets/pull/2754
2021-08-03T13:14:52
2021-08-04T08:49:02
2021-08-04T08:49:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
959,036,995
2,753
Generate metadata JSON for reclor dataset
Related to #2743.
closed
https://github.com/huggingface/datasets/pull/2753
2021-08-03T11:52:29
2021-08-04T08:07:15
2021-08-04T08:07:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
959,023,608
2,752
Generate metadata JSON for lm1b dataset
Related to #2743.
closed
https://github.com/huggingface/datasets/pull/2752
2021-08-03T11:34:56
2021-08-04T06:40:40
2021-08-04T06:40:39
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
959,021,262
2,751
Update metadata for wikihow dataset
Update metadata for wikihow dataset: - Remove leading new line character in description and citation - Update metadata JSON - Remove no longer necessary `urls_checksums/checksums.txt` file Related to #2748.
closed
https://github.com/huggingface/datasets/pull/2751
2021-08-03T11:31:57
2021-08-03T15:52:09
2021-08-03T15:52:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
958,984,730
2,750
Second concatenation of datasets produces errors
Hi, I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the usage of a tokenize function with `data.map`. ``` from datasets import load_dataset, concatenate_datasets data = load_dataset('trec')['train'] concatenated = concatenate_datasets([data, data]) concatenated_2 = concatenate_datasets([concatenated, concatenated]) print('True features of features:', concatenated.features) print('\nProduced features of features:', concatenated_2.features) ``` outputs ``` True features of features: {'label-coarse': ClassLabel(num_classes=6, names=['DESC', 'ENTY', 'ABBR', 'HUM', 'NUM', 'LOC'], names_file=None, id=None), 'label-fine': ClassLabel(num_classes=47, names=['manner', 'cremat', 'animal', 'exp', 'ind', 'gr', 'title', 'def', 'date', 'reason', 'event', 'state', 'desc', 'count', 'other', 'letter', 'religion', 'food', 'country', 'color', 'termeq', 'city', 'body', 'dismed', 'mount', 'money', 'product', 'period', 'substance', 'sport', 'plant', 'techmeth', 'volsize', 'instru', 'abb', 'speed', 'word', 'lang', 'perc', 'code', 'dist', 'temp', 'symbol', 'ord', 'veh', 'weight', 'currency'], names_file=None, id=None), 'text': Value(dtype='string', id=None)} Produced features of features: {'label-coarse': Value(dtype='int64', id=None), 'label-fine': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)} ``` I am using `datasets` v1.11.0
closed
https://github.com/huggingface/datasets/issues/2750
2021-08-03T10:47:04
2022-01-19T14:23:43
2022-01-19T14:19:05
{ "login": "Aktsvigun", "id": 36672861, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
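The collapse above (rich `ClassLabel` features degrading to plain `Value('int64')`) can be modelled with plain dicts. A hedged sketch of a client-side repair follows; in practice this corresponds to re-applying the known-good features, e.g. via `Dataset.cast`, which is an assumption about a workable workaround rather than an official fix:

```python
# Pure-dict model of the repair: wherever a collapsed feature spec has a
# name that also exists in the original (rich) spec, prefer the original.
def restore_features(collapsed: dict, original: dict) -> dict:
    return {name: original.get(name, feat) for name, feat in collapsed.items()}
```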
958,968,748
2,749
Raise a proper exception when trying to stream a dataset that requires to manually download files
## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("reclor", streaming=True) ``` ## Expected results Ideally: raise a specific exception, something like `ManualDownloadError`. Or at least give the reason in the message, as when we load in normal mode: ```python from datasets import load_dataset dataset = load_dataset("reclor") ``` ``` AssertionError: The dataset reclor with config default requires manual data. Please follow the manual download instructions: to use ReClor you need to download it manually. Please go to its homepage (http://whyu.me/reclor/) fill the google form and you will receive a download link and a password to extract it.Please extract all files in one folder and use the path folder in datasets.load_dataset('reclor', data_dir='path/to/folder/folder_name') . Manual data can be loaded with `datasets.load_dataset(reclor, data_dir='<path/to/manual/data>') ``` ## Actual results ``` TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-11.5-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2749
2021-08-03T10:26:27
2021-08-09T08:53:35
2021-08-04T11:36:30
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
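PR #2758 (above) implements exactly this. A minimal sketch of the shape of such a check — names are illustrative, not the real `datasets` internals:

```python
class ManualDownloadError(Exception):
    """Raised when a dataset needs files the user must download manually."""

def check_manual_download(manual_instructions, data_dir, name="reclor"):
    # Fail early, in both normal and streaming mode, with the instructions
    # that the builder script declares (illustrative sketch only).
    if manual_instructions is not None and data_dir is None:
        raise ManualDownloadError(
            f"The dataset {name} requires manual data. {manual_instructions} "
            f"Then load it with datasets.load_dataset('{name}', data_dir='<path/to/manual/data>')."
        )
```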
958,889,041
2,748
Generate metadata JSON for wikihow dataset
Related to #2743.
closed
https://github.com/huggingface/datasets/pull/2748
2021-08-03T08:55:40
2021-08-03T10:17:51
2021-08-03T10:17:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
958,867,627
2,747
add multi-proc in `to_json`
Closes #2663. I've tried adding multiprocessing in `to_json`. Here's some benchmarking I did to compare the timings of current version (say v1) and multi-proc version (say v2). I did this with `cpu_count` 4 (2015 Macbook Air) 1. Dataset name: `ascent_kb` - 8.9M samples (all samples were used, reporting this for a single run) v1- ~225 seconds for converting whole dataset to json v2- ~200 seconds for converting whole dataset to json 2. Dataset name: `lama` - 1.3M samples (all samples were used, reporting this for 2 runs) v1- ~26 seconds for converting whole dataset to json v2- ~23.6 seconds for converting whole dataset to json I think it's safe to say that v2 is 10% faster as compared to v1. Timings may improve further with better configuration. The only bottleneck I feel is writing to file from the output list. If we can improve that aspect then timings may improve further. Let me know if any changes/improvements can be done in this @stas00, @lhoestq, @albertvillanova. @lhoestq even suggested to extend this work with other export methods as well like `csv` or `parquet`.
closed
https://github.com/huggingface/datasets/pull/2747
2021-08-03T08:30:13
2021-10-19T18:24:21
2021-09-13T13:56:37
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
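The approach benchmarked above — shard the rows, encode shards to JSON Lines in worker processes, write the encoded chunks out sequentially — can be sketched generically. This is an illustration of the idea, not the actual `Dataset.to_json` implementation:

```python
import json
from multiprocessing import Pool

def _encode_shard(rows):
    # JSON Lines encoding of one shard of rows.
    return "".join(json.dumps(row) + "\n" for row in rows)

def rows_to_jsonl(rows, num_proc=1, shard_size=1000):
    shards = [rows[i : i + shard_size] for i in range(0, len(rows), shard_size)]
    if num_proc <= 1:
        return "".join(map(_encode_shard, shards))
    # Encode shards in parallel, then join in order so the output is stable.
    with Pool(num_proc) as pool:
        return "".join(pool.map(_encode_shard, shards))
```

With the PR merged, the user-facing call is expected to look like `dataset.to_json('out.jsonl', num_proc=4)` (parameter name assumed from the discussion).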
958,551,619
2,746
Cannot load `few-nerd` dataset
## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached version of the module from /Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53 (last modified on Wed Jun 2 11:34:25 2021) since it couldn't be found locally at /Users/Mehrad/Documents/GitHub/genienlp/few-nerd/few-nerd.py, or remotely (FileNotFoundError). Downloading and preparing dataset few_nerd/supervised (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/Mehrad/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53... Traceback (most recent call last): File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split disable=bool(logging.get_verbosity() == logging.NOTSET), File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53/few-nerd.py", line 196, in _generate_examples with open(filepath, encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: '/Users/Mehrad/.cache/huggingface/datasets/downloads/supervised/train.json' ``` The bug is probably in identifying and downloading the dataset. 
If I download the json splits directly from [link](https://github.com/nbroad1881/few-nerd/tree/main/uncompressed) and put them under the downloads directory, they will be processed into arrow format correctly. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Python version: 3.8 - PyArrow version: 1.0.1
closed
https://github.com/huggingface/datasets/issues/2746
2021-08-02T22:18:57
2021-11-16T08:51:34
2021-08-03T19:45:43
{ "login": "Mehrad0711", "id": 28717374, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
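Since the loader looks for the splits under a fixed cache layout (visible in the traceback), the manual workaround described above amounts to placing the hand-downloaded json files at the expected paths. A small sketch — the layout is read off the `FileNotFoundError` and should be treated as an assumption, not a stable API:

```python
from pathlib import Path

def expected_split_path(cache_root: str, split: str) -> Path:
    # Layout taken from the traceback above:
    # <cache>/downloads/supervised/<split>.json
    return Path(cache_root) / "downloads" / "supervised" / f"{split}.json"
```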
958,269,579
2,745
added semeval18_emotion_classification dataset
I added the data set of SemEval 2018 Task 1 (Subtask 5) for emotion detection in three languages. ``` datasets-cli test datasets/semeval18_emotion_classification/ --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_semeval18_emotion_classification ``` Both commands ran successfully. I couldn't create the dummy data (the files are tsvs but have .txt ending, maybe that's the problem?) and therefore the test on the dummy data fails, maybe someone can help here. I also formatted the code: ``` black --line-length 119 --target-version py36 datasets/semeval18_emotion_classification/ isort datasets/semeval18_emotion_classification/ flake8 datasets/semeval18_emotion_classification/ ``` That's the publication for reference: Mohammad, S., Bravo-Marquez, F., Salameh, M., & Kiritchenko, S. (2018). SemEval-2018 task 1: Affect in tweets. Proceedings of the 12th International Workshop on Semantic Evaluation, 1โ€“17. https://doi.org/10.18653/v1/S18-1001
closed
https://github.com/huggingface/datasets/pull/2745
2021-08-02T15:39:55
2021-10-29T09:22:05
2021-09-21T09:48:35
{ "login": "maxpel", "id": 31095360, "type": "User" }
[]
true
[]
958,146,637
2,744
Fix key by recreating metadata JSON for journalists_questions dataset
Close #2743.
closed
https://github.com/huggingface/datasets/pull/2744
2021-08-02T13:27:53
2021-08-03T09:25:34
2021-08-03T09:25:33
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
958,119,251
2,743
Dataset JSON is incorrect
## Describe the bug The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset_infos.json. The only config should be `plain_text`, but the first key in the JSON is `journalists_questions` (the dataset id) instead. ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Steps to reproduce the bug Look at the files. ## Expected results The first key should be `plain_text`: ```json { "plain_text": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Actual results ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ```
closed
https://github.com/huggingface/datasets/issues/2743
2021-08-02T13:01:26
2021-08-03T10:06:57
2021-08-03T09:25:33
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
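A quick sanity check for this class of bug — the top-level keys of `dataset_infos.json` must be config names, never the dataset id — can be scripted. A hedged sketch (the expected-config set must be supplied by hand):

```python
import json

def unexpected_config_keys(dataset_infos_text: str, expected_configs: set) -> list:
    # Return top-level keys that are not known config names, e.g. a dataset
    # id that leaked in as the key, as reported above.
    return [key for key in json.loads(dataset_infos_text) if key not in expected_configs]
```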
958,114,064
2,742
Improve detection of streamable file types
**Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset_builder from datasets.utils.streaming_download_manager import StreamingDownloadManager builder = load_dataset_builder("journalists_questions", name="plain_text") builder._split_generators(StreamingDownloadManager(base_path=builder.base_path)) ``` raises ``` NotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet ``` But the file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is a text file and it can be streamed: ```bash curl --header "Range: bytes=0-100" -L https://drive.google.com/uc\?export\=download\&id\=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U 506938088174940160 yes 1 302221719412830209 yes 1 289761704907268096 yes 1 513820885032378369 yes % ``` Yet, it's wrongly categorized as a file type that cannot be streamed because the test is currently based on 1. the presence of a file extension at the end of the URL (here: no extension), and 2. the inclusion of this extension in a list of supported formats. **Describe the solution you'd like** In the case of an URL (instead of a local path), ask for the MIME type, and decide on that value? Note that it would not work in that case, because the value of `content_type` is `text/html; charset=UTF-8`. **Describe alternatives you've considered** Add a variable in the dataset script to set the data format by hand.
closed
https://github.com/huggingface/datasets/issues/2742
2021-08-02T12:55:09
2021-11-12T17:18:10
2021-11-12T17:18:10
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
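The extension-based test described above can be sketched in a few lines, which also shows why the Google Drive URL is misclassified: its URL path has no suffix at all. The supported-format list here is illustrative, not the exact one used by `datasets`:

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

SUPPORTED_SUFFIXES = {".csv", ".tsv", ".txt", ".json", ".jsonl"}  # illustrative

def looks_streamable(url: str) -> bool:
    # Decide purely from the file extension at the end of the URL path.
    suffix = PurePosixPath(urlparse(url).path).suffix.lower()
    return suffix in SUPPORTED_SUFFIXES
```

The MIME-type alternative discussed above would replace the suffix lookup with a HEAD request for `Content-Type` — which, as noted, Google Drive defeats by answering `text/html; charset=UTF-8`.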
957,979,559
2,741
Add Hypersim dataset
## Adding a Dataset - **Name:** Hypersim - **Description:** photorealistic synthetic dataset for holistic indoor scene understanding - **Paper:** *link to the dataset paper if available* - **Data:** https://github.com/apple/ml-hypersim Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2741
2021-08-02T10:06:50
2021-12-08T12:06:51
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
957,911,035
2,740
Update release instructions
Update release instructions.
closed
https://github.com/huggingface/datasets/pull/2740
2021-08-02T08:46:00
2021-08-02T14:39:56
2021-08-02T14:39:56
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
957,751,260
2,739
Pass tokenize to sacrebleu only if explicitly passed by user
Next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes `tokenize` to `sacrebleu` only if explicitly passed by the user, otherwise it will not pass it (and `sacrebleu` will use its default, no matter where it is and how it is called). Close: #2737.
closed
https://github.com/huggingface/datasets/pull/2739
2021-08-02T05:09:05
2021-08-03T04:23:37
2021-08-03T04:23:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
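The conditional forwarding this PR describes can be sketched as a tiny kwargs builder — illustrative names, not the actual metric code:

```python
def build_sacrebleu_kwargs(tokenize=None, **extra):
    # Forward `tokenize` only when the caller set it explicitly, so that
    # sacrebleu's own default applies in every version, old or new.
    kwargs = dict(extra)
    if tokenize is not None:
        kwargs["tokenize"] = tokenize
    return kwargs
```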
957,517,746
2,738
Sunbird AI Ugandan low resource language dataset
Multi-way parallel text corpus of 5 key Ugandan languages for the task of machine translation.
closed
https://github.com/huggingface/datasets/pull/2738
2021-08-01T15:18:00
2022-10-03T09:37:30
2022-10-03T09:37:30
{ "login": "ak3ra", "id": 12105163, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
957,124,881
2,737
SacreBLEU update
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises: AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' This happens because new versions of sacrebleu no longer define `DEFAULT_TOKENIZER`, but sacrebleu.py still tries to import it. For now this can be worked around by pinning `sacrebleu==1.5.0`. ## Steps to reproduce the bug ```python sacrebleu = datasets.load_metric('sacrebleu') predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"] references = ["It is a guide to action that ensures that the military will forever heed Party commands"] results = sacrebleu.compute(predictions=predictions, references=references) print(results) ``` ## Environment info - `datasets` version: 1.11.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: Python 3.8.0 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/2737
2021-07-30T23:53:08
2021-09-22T10:47:41
2021-08-03T04:23:37
{ "login": "devrimcavusoglu", "id": 46989091, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
956,895,199
2,736
Add Microsoft Building Footprints dataset
## Adding a Dataset - **Name:** Microsoft Building Footprints - **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.microsoft.com/en-us/maps/building-footprints - **Motivation:** this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @sashavor
open
https://github.com/huggingface/datasets/issues/2736
2021-07-30T16:17:08
2021-12-08T12:09:03
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
956,889,365
2,735
Add Open Buildings dataset
## Adding a Dataset - **Name:** Open Buildings - **Description:** A dataset of building footprints to support social good applications. Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science. This large-scale open dataset contains the outlines of buildings derived from high-resolution satellite imagery in order to support these types of uses. The project being based in Ghana, the current focus is on the continent of Africa. See: "Mapping Africa's Buildings with Satellite Imagery" https://ai.googleblog.com/2021/07/mapping-africas-buildings-with.html - **Paper:** https://arxiv.org/abs/2107.12283 - **Data:** https://sites.research.google/open-buildings/ - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @osanseviero
open
https://github.com/huggingface/datasets/issues/2735
2021-07-30T16:08:39
2021-07-31T05:01:25
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
956,844,874
2,734
Update BibTeX entry
Update BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/2734
2021-07-30T15:22:51
2021-07-30T15:47:58
2021-07-30T15:47:58
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
956,725,476
2,733
Add missing parquet known extension
This code was failing because the parquet extension wasn't recognized: ```python from datasets import load_dataset base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/" data_files = {"train": base_url + "wikipedia-train.parquet"} wiki = load_dataset("parquet", data_files=data_files, split="train", streaming=True) ``` It raises ```python NotImplementedError: Extraction protocol for file at https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/wikipedia-train.parquet is not implemented yet ``` I added `parquet` to the list of known extensions EDIT: added pickle, conllu, xml extensions as well
closed
https://github.com/huggingface/datasets/pull/2733
2021-07-30T13:01:20
2021-07-30T13:24:31
2021-07-30T13:24:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
956,676,360
2,732
Updated TTC4900 Dataset
- The source address of the TTC4900 dataset of [@savasy](https://github.com/savasy) has been updated for direct download. - Updated readme.
closed
https://github.com/huggingface/datasets/pull/2732
2021-07-30T11:52:14
2021-07-30T16:00:51
2021-07-30T15:58:14
{ "login": "yavuzKomecoglu", "id": 5150963, "type": "User" }
[]
true
[]
956,087,452
2,731
Adding to_tf_dataset method
Oh my **god** do not merge this yet, it's just a draft. I've added a method (via a mixin) to the `arrow_dataset.Dataset` class that automatically converts our Dataset classes to TF Dataset classes ready for training. It hopefully has most of the features we want, including streaming from disk (no need to load the whole dataset in memory!), correct shuffling, variable-length batches to reduce compute, and correct support for unusual padding. It achieves that by calling the tokenizer `pad` method in the middle of a TF compute graph via a very hacky call to `tf.py_function`, which is heretical but seems to work. A number of issues need to be resolved before it's ready to merge, though: 1) Is a MixIn the right way to do this? Do other classes besides `arrow_dataset.Dataset` need this method too? 2) Needs an argument to support constant-length batches for TPU training - this is easy to add and I'll do it soon. 3) Needs the user to supply the list of columns to drop from the arrow `Dataset`. Is there some automatic way to get the columns we want, or see which columns were added by the tokenizer? 4) Assumes the label column is always present and always called "label" - this is probably not great, but I'm not sure what the 'correct' thing to do here is.
closed
https://github.com/huggingface/datasets/pull/2731
2021-07-29T18:10:25
2021-09-16T13:50:54
2021-09-16T13:50:54
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[]
true
[]
955,987,834
2,730
Update CommonVoice with new release
## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8x, from 24 to 220). - **Paper:** https://discourse.mozilla.org/t/common-voice-2021-mid-year-dataset-release/83812 - **Data:** https://commonvoice.mozilla.org/en/datasets - **Motivation:** More data and more varied. I think we just need to add configs in the existing dataset script. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2730
2021-07-29T15:59:59
2021-08-07T16:19:19
null
{ "login": "yjernite", "id": 10469459, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
955,920,489
2,729
Fix IndexError while loading Arabic Billion Words dataset
Catch `IndexError` and ignore that record. Close #2727.
closed
https://github.com/huggingface/datasets/pull/2729
2021-07-29T14:47:02
2021-07-30T13:03:55
2021-07-30T13:03:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
true
[]
955,892,970
2,728
Concurrent use of same dataset (already downloaded)
## Describe the bug Launching several jobs at the same time that load the same dataset triggers some errors (see last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" "bert-large-cased" "roberta-large" "albert-base-v1" "albert-large-v1"; do for TASK_NAME in "mrpc" "rte" 'imdb' "paws" "mnli"; do export OUTPUT_DIR=${MODEL}_${TASK_NAME} sbatch --job-name=${OUTPUT_DIR} \ --gres=gpu:1 \ --no-requeue \ --cpus-per-task=10 \ --hint=nomultithread \ --time=1:00:00 \ --output=jobinfo/${OUTPUT_DIR}_%j.out \ --error=jobinfo/${OUTPUT_DIR}_%j.err \ --qos=qos_gpu-t4 \ --wrap="module purge; module load pytorch-gpu/py3/1.7.0 ; export HF_DATASETS_OFFLINE=1; export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets; python compute_measures.py --seed=$SEED --saving_path=results --batch_size=$BATCH_SIZE --task_name=$TASK_NAME --model_name=/gpfswork/rech/toto/transformers_models/$MODEL" done done ```python # Sample code to reproduce the bug dataset_train = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_train = dataset_train.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter))) dataset_val = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_val = dataset_val.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter, args.filter + 5000))) dataset_test = load_dataset('imdb', split='test', download_mode="reuse_cache_if_exists") dataset_test = dataset_test.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True) ``` ## Expected results I believe I am doing something wrong with the objects.
## Actual results Traceback (most recent call last): File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 983, in _prepare_split check_duplicates=True, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/arrow_writer.py", line 192, in __init__ self.stream = pa.OSFile(self._path, "wb") File "pyarrow/io.pxi", line 829, in pyarrow.lib.OSFile.__cinit__ File "pyarrow/io.pxi", line 844, in pyarrow.lib.OSFile._open_writable File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status FileNotFoundError: [Errno 2] Failed to open local file '/gpfswork/rech/tts/unm25jp/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. 
Detail: [errno 2] No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File "compute_measures.py", line 181, in <module> train_loader, val_loader, test_loader = get_dataloader(args) File "/gpfsdswork/projects/rech/toto/intRAOcular/dataset_utils.py", line 69, in get_dataloader dataset_train = load_dataset('paws', "labeled_final", split='train', download_mode="reuse_cache_if_exists") File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 658, in _download_and_prepare + str(e) OSError: Cannot find data file. Original error: [Errno 2] Failed to open local file '/gpfswork/rech/toto/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. Detail: [errno 2] No such file or directory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets==1.8.0 - Platform: linux (jeanzay) - Python version: pyarrow==2.0.0 - PyArrow version: 3.7.8
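As a stopgap while this is unresolved, one hedged workaround sketch (not part of the `datasets` API; the lock path and helper name are illustrative) is to let exactly one job build the shared cache by atomically creating a sentinel file:

```python
import os
import tempfile

lock_path = os.path.join(tempfile.gettempdir(), "demo_cache.lock")  # hypothetical path

def try_acquire(path):
    # O_CREAT | O_EXCL makes creation atomic: exactly one process succeeds,
    # so only that process should call load_dataset() and build the cache;
    # the others wait for the sentinel to disappear and then reuse the cache.
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

print(try_acquire(lock_path))  # True for the first job
print(try_acquire(lock_path))  # False for concurrent jobs
os.remove(lock_path)
```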
open
https://github.com/huggingface/datasets/issues/2728
2021-07-29T14:18:38
2021-08-02T07:25:57
null
{ "login": "PierreColombo", "id": 22492839, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
955,812,149
2,727
Error in loading the Arabic Billion Words Corpus
## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_words", "Almustaqbal") ``` ## Expected results The datasets load successfully. ## Actual results ```python _extract_tags(self, sample, tag) 139 if len(out) > 0: 140 break --> 141 return out[0] 142 143 def _clean_text(self, text): IndexError: list index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.2 - Platform: Ubuntu 18.04.5 LTS - Python version: 3.7.11 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2727
2021-07-29T12:53:09
2021-07-30T13:03:55
2021-07-30T13:03:55
{ "login": "M-Salti", "id": 9285264, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
955,674,388
2,726
Typo fix `tokenize_exemple`
There is a small typo in the main README.md
closed
https://github.com/huggingface/datasets/pull/2726
2021-07-29T10:03:37
2021-07-29T12:00:25
2021-07-29T12:00:25
{ "login": "shabie", "id": 30535146, "type": "User" }
[]
true
[]
955,020,776
2,725
Pass use_auth_token to request_etags
Fix #2724.
closed
https://github.com/huggingface/datasets/pull/2725
2021-07-28T16:13:29
2021-07-28T16:38:02
2021-07-28T16:38:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
954,919,607
2,724
404 Error when loading remote data files from private repo
## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=url, use_auth_token=True) # HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/datasets/lewtun/asr-preds-test/resolve/main/preds.jsonl ``` ## Expected results Load dataset. ## Actual results 404 Error.
closed
https://github.com/huggingface/datasets/issues/2724
2021-07-28T14:24:23
2021-07-29T04:58:49
2021-07-28T16:38:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
954,864,104
2,723
Fix en subset by modifying dataset_info with correct validation infos
- Related to: #2682 We correct the values of the `en` subset concerning the expected validation values (both `num_bytes` and `num_examples`). Instead of having: `{"name": "validation", "num_bytes": 828589180707, "num_examples": 364868892, "dataset_name": "c4"}` We replace with the correct values: `{"name": "validation", "num_bytes": 825767266, "num_examples": 364608, "dataset_name": "c4"}` There are still issues with validation for other subsets, but I can't download and unzip all the files to check for the correct number of bytes. (If you have a fast way to obtain those values for other subsets, I can do this in this PR ... otherwise I can't spend those resources)
closed
https://github.com/huggingface/datasets/pull/2723
2021-07-28T13:36:19
2021-07-28T15:22:23
2021-07-28T15:22:23
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
954,446,053
2,722
Missing cache file
The cache file is strangely missing after I restart my program again. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: '/Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json'`
closed
https://github.com/huggingface/datasets/issues/2722
2021-07-28T03:52:07
2022-03-21T08:27:51
2022-03-21T08:27:51
{ "login": "PosoSAgapo", "id": 33200481, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
954,238,230
2,721
Deal with the bad check in test_load.py
This PR removes a check that's been added in #2684. My intention with this check was to capture a URL in the error message, but instead, it captures a substring of the previous regex match in the test function. Another option would be to replace this check with: ```python m_paths = re.findall(r"\S*_dummy/_dummy.py\b", str(exc_info.value)) # on Linux this will match a URL as well as a local_path due to different os.sep, so take the last element (a URL always comes last in the list) assert len(m_paths) > 0 and is_remote_url(m_paths[-1]) # is_remote_url comes from datasets.utils.file_utils ``` @lhoestq Let me know which one of these two approaches (delete or replace) you prefer.
closed
https://github.com/huggingface/datasets/pull/2721
2021-07-27T20:23:23
2021-07-28T09:58:34
2021-07-28T08:53:18
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
954,024,426
2,720
fix: ๐Ÿ› fix two typos
closed
https://github.com/huggingface/datasets/pull/2720
2021-07-27T15:50:17
2021-07-27T18:38:17
2021-07-27T18:38:16
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
953,932,416
2,719
Use ETag in streaming mode to detect resource updates
**Is your feature request related to a problem? Please describe.** I want to cache data I generate from processing a dataset I've loaded in streaming mode, but I've currently no way to know if the remote data has been updated or not, thus I don't know when to invalidate my cache. **Describe the solution you'd like** Take the ETag of the data files into account and provide it (directly or through a hash) to give a signal that I can invalidate my cache. **Describe alternatives you've considered** None
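A minimal sketch of the invalidation logic I have in mind (the function and argument names are my own illustration, not a proposed `datasets` API):

```python
def is_cache_stale(cached_etag, remote_etag):
    # With no recorded ETag on either side we cannot prove freshness,
    # so play it safe and rebuild the derived cache.
    if cached_etag is None or remote_etag is None:
        return True
    return cached_etag != remote_etag

print(is_cache_stale('"abc"', '"abc"'))  # False: remote unchanged, keep my cache
print(is_cache_stale('"abc"', '"def"'))  # True: resource updated, invalidate
print(is_cache_stale(None, '"abc"'))     # True: no ETag recorded yet
```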
open
https://github.com/huggingface/datasets/issues/2719
2021-07-27T14:17:09
2021-10-22T09:36:08
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
953,360,663
2,718
New documentation structure
Organize Datasets documentation into four documentation types to improve clarity and discoverability of content. **Content to add in the very short term (feel free to add anything I'm missing):** - A discussion on why Datasets uses Arrow that includes some context and background about why we use Arrow. Would also be great to talk about Datasets speed and performance here, and if you can share any benchmarking/tests you did, that would be awesome! Finally, a discussion about how memory-mapping frees the user from RAM constraints would be very helpful. - Explain why you would want to disable or override verifications when loading a dataset. - If possible, include a code sample of when the number of elements in the field of an output dictionary aren't the same as the other fields in the output dictionary (taken from the [note](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset) here).
closed
https://github.com/huggingface/datasets/pull/2718
2021-07-26T23:15:13
2021-09-13T17:20:53
2021-09-13T17:20:52
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
952,979,976
2,717
Fix shuffle on IterableDataset that disables batching in case any functions were mapped
Made a very minor change to fix the issue#2716. Added the missing argument in the constructor call. As discussed in the bug report, the change is made to prevent the `shuffle` method call from resetting the value of `batched` attribute in `MappedExamplesIterable` Fix #2716.
closed
https://github.com/huggingface/datasets/pull/2717
2021-07-26T14:42:22
2021-07-26T18:04:14
2021-07-26T16:30:06
{ "login": "amankhandelia", "id": 7098967, "type": "User" }
[]
true
[]
952,902,778
2,716
Calling shuffle on IterableDataset will disable batching in case any functions were mapped
When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset and a `map` method for which `batched=True`, then the batching operation will not happen; instead `batched` will be set to `False`. I did an RCA on the dataset codebase; the problem originates from [this line of code](https://github.com/huggingface/datasets/blob/d25a0bf94d9f9a9aa6cabdf5b450b9c327d19729/src/datasets/iterable_dataset.py#L197), which reads `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batch_size=self.batch_size`. As one can see, it is missing the `batched` argument, which means that the iterator falls back to the default constructor value, which in this case is `False`. To remedy the problem we can change this line to `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batched=self.batched, batch_size=self.batch_size`
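A toy reproduction of the failure mode (a deliberately simplified class, not the actual `datasets` code):

```python
class MappedIterable:
    def __init__(self, source, function, batched=False, batch_size=1):
        self.source = source
        self.function = function
        self.batched = batched
        self.batch_size = batch_size

    def shuffle_buggy(self):
        # `batched` is not forwarded, so it silently resets to the default False.
        return MappedIterable(self.source, self.function, batch_size=self.batch_size)

    def shuffle_fixed(self):
        # Forwarding every constructor argument preserves the mapped state.
        return MappedIterable(
            self.source, self.function, batched=self.batched, batch_size=self.batch_size
        )

it = MappedIterable([1, 2, 3], str, batched=True, batch_size=8)
print(it.shuffle_buggy().batched)  # False -- the bug
print(it.shuffle_fixed().batched)  # True  -- after forwarding the argument
```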
closed
https://github.com/huggingface/datasets/issues/2716
2021-07-26T13:24:59
2021-07-26T18:04:43
2021-07-26T18:04:43
{ "login": "amankhandelia", "id": 7098967, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
952,845,229
2,715
Update PAN-X data URL in XTREME dataset
Related to #2710, #2691.
closed
https://github.com/huggingface/datasets/pull/2715
2021-07-26T12:21:17
2021-07-26T13:27:59
2021-07-26T13:27:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
952,580,820
2,714
add more precise information for size
For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets.
open
https://github.com/huggingface/datasets/issues/2714
2021-07-26T07:11:03
2021-07-26T09:16:25
null
{ "login": "pennyl67", "id": 1493902, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
952,515,256
2,713
Enumerate all ner_tags values in WNUT 17 dataset
This PR does: - Enumerate all ner_tags in dataset card Data Fields section - Add all metadata tags to dataset card Close #2709.
closed
https://github.com/huggingface/datasets/pull/2713
2021-07-26T05:22:16
2021-07-26T09:30:55
2021-07-26T09:30:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
951,723,326
2,710
Update WikiANN data URL
WikiANN data source URL is no longer accessible: 404 error from Dropbox. We have decided to host it at Hugging Face. This PR updates the data source URL, the metadata JSON file and the dataset card. Close #2691.
closed
https://github.com/huggingface/datasets/pull/2710
2021-07-23T16:29:21
2021-07-26T09:34:23
2021-07-26T09:34:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
951,534,757
2,709
Missing documentation for wnut_17 (ner_tags)
On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases: `ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).` I trained a model with the data and it gives me 13 classes: ``` "id2label": { "0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "12": 12 } "label2id": { "0": 0, "1": 1, "10": 10, "11": 11, "12": 12, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9 } ``` The paper (https://www.aclweb.org/anthology/W17-4418.pdf) explains those 6 categories, but the ordering does not match: ``` 1. person 2. location (including GPE, facility) 3. corporation 4. product (tangible goods, or well-defined services) 5. creative-work (song, movie, book and so on) 6. group (subsuming music band, sports team, and non-corporate organisations) ``` It would be very helpful for me if somebody could clarify the model outputs and explain the "B-" and "I-" prefixes to me. Really great work with that and the other packages, I couldn't believe that training the model with that data was basically a one-liner!
closed
https://github.com/huggingface/datasets/issues/2709
2021-07-23T12:25:32
2021-07-26T09:30:55
2021-07-26T09:30:55
{ "login": "maxpel", "id": 31095360, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
951,092,660
2,708
QASC: incomplete training set
## Describe the bug The training instances are not loaded properly. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("qasc", script_version='1.10.2') def load_instances(split): instances = dataset[split] print(f"split: {split} - size: {len(instances)}") for x in instances: print(json.dumps(x)) load_instances('test') load_instances('validation') load_instances('train') ``` ## results For test and validation, we can see the examples in the output (which is good!): ``` split: test - size: 920 {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Anthax", "under water", "uterus", "wombs", "two", "moles", "live", "embryo"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What type of birth do therian mammals have? (A) Anthax (B) under water (C) uterus (D) wombs (E) two (F) moles (G) live (H) embryo", "id": "3C44YUNSI1OBFBB8D36GODNOZN9DPA", "question": "What type of birth do therian mammals have?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Corvidae", "arthropods", "birds", "backbones", "keratin", "Jurassic", "front paws", "Parakeets."]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "By what time had mouse-sized viviparous mammals evolved? (A) Corvidae (B) arthropods (C) birds (D) backbones (E) keratin (F) Jurassic (G) front paws (H) Parakeets.", "id": "3B1NLC6UGZVERVLZFT7OUYQLD1SGPZ", "question": "By what time had mouse-sized viviparous mammals evolved?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Reduced friction", "causes infection", "vital to a good life", "prevents water loss", "camouflage from consumers", "Protection against predators", "spur the growth of the plant", "a smooth surface"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What does a plant's skin do? 
(A) Reduced friction (B) causes infection (C) vital to a good life (D) prevents water loss (E) camouflage from consumers (F) Protection against predators (G) spur the growth of the plant (H) a smooth surface", "id": "3QRYMNZ7FYGITFVSJET3PS0F4S0NT9", "question": "What does a plant's skin do?"} ... ``` However, only a few instances are loaded for the training split, which is not correct. ## Environment info - `datasets` version: '1.10.2' - Platform: MaxOS - Python version:3.7 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2708
2021-07-22T21:59:44
2021-07-23T13:30:07
2021-07-23T13:30:07
{ "login": "danyaljj", "id": 2441454, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
950,812,945
2,707
404 Not Found Error when loading LAMA dataset
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/lama/lama.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/lama/lama.py`
closed
https://github.com/huggingface/datasets/issues/2707
2021-07-22T15:52:33
2021-07-26T14:29:07
2021-07-26T14:29:07
{ "login": "dwil2444", "id": 26467159, "type": "User" }
[]
false
[]
950,606,561
2,706
Update BibTeX entry
Update BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/2706
2021-07-22T12:29:29
2021-07-22T12:43:00
2021-07-22T12:43:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
950,488,583
2,705
404 not found error on loading WIKIANN dataset
## Describe the bug Unable to retrieve the wikiann English dataset ## Steps to reproduce the bug ```python from datasets import list_datasets, load_dataset, list_metrics, load_metric WIKIANN = load_dataset("wikiann","en") ``` ## Expected results Colab notebook should display successful download status ## Actual results FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1 ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2705
2021-07-22T09:55:50
2021-07-23T08:07:32
2021-07-23T08:07:32
{ "login": "ronbutan", "id": 39296659, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
950,483,980
2,704
Fix pick default config name message
The error message to tell which config name to load is not displayed. This is because in the code it was considering the config kwargs to be non-empty, which is a special case for custom configs created on the fly. It appears after this change: https://github.com/huggingface/datasets/pull/2659 I fixed that by making the config kwargs empty by default, even if default parameters are passed Fix https://github.com/huggingface/datasets/issues/2703
closed
https://github.com/huggingface/datasets/pull/2704
2021-07-22T09:49:43
2021-07-22T10:02:41
2021-07-22T10:02:40
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
950,482,284
2,703
Bad message when config name is missing
When loading a dataset that has several configurations, we expect to see an error message if the user doesn't specify a config name. However, in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message: ```python import datasets datasets.load_dataset("glue") ``` raises ```python AttributeError: 'BuilderConfig' object has no attribute 'text_features' ``` instead of ```python ValueError: Config name is missing. Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax'] Example of usage: `load_dataset('glue', 'cola')` ```
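A toy sketch of the expected behavior (the helper name and signature are illustrative, not the actual `datasets` internals):

```python
def pick_config(name, available):
    # With several configs and no name given, fail loudly with the options,
    # rather than crashing later with an unrelated AttributeError.
    if name is None and len(available) > 1:
        raise ValueError(
            "Config name is missing. Please pick one among the available "
            f"configs: {available}\nExample of usage:\n"
            f"\t`load_dataset('glue', '{available[0]}')`"
        )
    return name if name is not None else available[0]

try:
    pick_config(None, ["cola", "sst2", "mrpc"])
except ValueError as err:
    print(err)  # the helpful message, instead of an AttributeError
print(pick_config("cola", ["cola", "sst2", "mrpc"]))  # cola
```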
closed
https://github.com/huggingface/datasets/issues/2703
2021-07-22T09:47:23
2021-07-22T10:02:40
2021-07-22T10:02:40
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
950,448,159
2,702
Update BibTeX entry
Update BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/2702
2021-07-22T09:04:39
2021-07-22T09:17:39
2021-07-22T09:17:38
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
950,422,403
2,701
Fix download_mode docstrings
Fix `download_mode` docstrings.
closed
https://github.com/huggingface/datasets/pull/2701
2021-07-22T08:30:25
2021-07-22T09:33:31
2021-07-22T09:33:31
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
950,276,325
2,700
from datasets import Dataset is failing
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import Dataset ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>() 25 import posixpath 26 import requests ---> 27 from tqdm.contrib.concurrent import thread_map 28 29 from .. import __version__, config, utils ModuleNotFoundError: No module named 'tqdm.contrib.concurrent' --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. --------------------------------------------------------------------------- ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: latest version as of 07/21/2021 - Platform: Google Colab - Python version: 3.7 - PyArrow version:
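For reference, a guarded import of the kind the library could use (a sketch, not the actual fix; the fallback only drops the progress bar):

```python
try:
    # Available in recent tqdm releases.
    from tqdm.contrib.concurrent import thread_map
except ImportError:
    # Fallback for older tqdm versions: same result, no progress bar.
    def thread_map(fn, iterable, **tqdm_kwargs):
        return list(map(fn, iterable))

print(thread_map(str, [1, 2, 3]))  # ['1', '2', '3']
```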
closed
https://github.com/huggingface/datasets/issues/2700
2021-07-22T03:51:23
2021-07-22T07:23:45
2021-07-22T07:09:07
{ "login": "kswamy15", "id": 5582286, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
950,221,226
2,699
cannot combine splits merging and streaming?
This does not work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)` and fails with the error: `ValueError: Bad split: train+validation. Available splits: ['train', 'validation']` These work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation')` `dataset = datasets.load_dataset('mc4','iw',split='train',streaming=True)` `dataset = datasets.load_dataset('mc4','iw',split='validation',streaming=True)` I could not find a reference to this in the documentation, and the error message is confusing. Also, it would be nice to allow streaming for the merged splits.
open
https://github.com/huggingface/datasets/issues/2699
2021-07-22T01:13:25
2024-04-08T13:26:46
null
{ "login": "eyaler", "id": 4436747, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
950,159,867
2,698
Ignore empty batch when writing
This prevents a schema update with unknown column types, as reported in #2644. This is my first attempt at fixing the issue. I tested the following: - The first batch returned by a batched map operation is empty. - An intermediate batch is empty. - `python -m unittest tests.test_arrow_writer` passes. However, `arrow_writer` looks like a pretty generic interface; I'm not sure if there are other uses I may have overlooked. Let me know if that's the case, or if a better approach would be preferable.
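The idea, reduced to a toy sketch (plain dicts instead of Arrow tables; not the actual writer code):

```python
def maybe_update_schema(schema, batch):
    # An empty batch carries no type information, so keep the current schema
    # instead of letting unknown column types overwrite it.
    if not batch or all(len(column) == 0 for column in batch.values()):
        return schema
    return schema or {name: type(column[0]).__name__ for name, column in batch.items()}

schema = maybe_update_schema(None, {"tokens": []})          # stays None
schema = maybe_update_schema(schema, {"tokens": [[1, 2]]})  # inferred from real data
print(schema)  # {'tokens': 'list'}
```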
closed
https://github.com/huggingface/datasets/pull/2698
2021-07-21T22:35:30
2021-07-26T14:56:03
2021-07-26T13:25:26
{ "login": "pcuenca", "id": 1177582, "type": "User" }
[]
true
[]
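A minimal sketch of the guard described in the PR above (#2698) — not the actual `ArrowWriter` code, just an illustration of the idea: batches with zero rows carry no type information, so writing them would force a schema update with unknown column types; skipping them keeps the inferred schema intact.

```python
# Hypothetical illustration of the fix in #2698: a writer step that
# ignores empty batches instead of letting them overwrite the schema.
def write_batch(schema, batch):
    """Return the (possibly updated) schema, skipping empty batches."""
    # An empty batch carries no type information, so writing it would
    # force a schema with unknown (null) column types.
    if not batch or all(len(column) == 0 for column in batch.values()):
        return schema  # nothing to write; keep the current schema
    if schema is None:
        # Infer column names from the first non-empty batch.
        schema = sorted(batch.keys())
    return schema

schema = None
for batch in [{"text": []}, {}, {"text": ["a", "b"]}]:
    schema = write_batch(schema, batch)
print(schema)  # ['text'] — the empty batches were ignored
```

The real fix lives inside `arrow_writer`, where the schema is a pyarrow schema rather than a list of column names; the skip-if-empty logic is the same.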
950,021,623
2,697
Fix import on Colab
Fix #2695, fix #2700.
closed
https://github.com/huggingface/datasets/pull/2697
2021-07-21T19:03:38
2021-07-22T07:09:08
2021-07-22T07:09:07
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
949,901,726
2,696
Add support for disable_progress_bar on Windows
This PR is a continuation of #2667 and adds support for `utils.disable_progress_bar()` on Windows when using multiprocessing. This [answer](https://stackoverflow.com/a/6596695/14095927) on SO explains it nicely why the current approach (with calling `utils.is_progress_bar_enabled()` inside `Dataset._map_single`) would not work on Windows.
closed
https://github.com/huggingface/datasets/pull/2696
2021-07-21T16:34:53
2021-07-26T13:31:14
2021-07-26T09:38:37
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
949,864,823
2,695
Cannot import load_dataset on Colab
## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install datasets from datasets import load_dataset ``` ## Expected results Works without error ## Actual results Specify the actual results or traceback. ``` ModuleNotFoundError Traceback (most recent call last) <ipython-input-2-8cc7de4c69eb> in <module>() ----> 1 from datasets import load_dataset, load_metric, Metric, MetricInfo, Features, Value 2 from sklearn.metrics import mean_squared_error /usr/local/lib/python3.7/dist-packages/datasets/__init__.py in <module>() 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in <module>() 40 from tqdm.auto import tqdm 41 ---> 42 from datasets.tasks.text_classification import TextClassification 43 44 from . import config, utils /usr/local/lib/python3.7/dist-packages/datasets/tasks/__init__.py in <module>() 1 from typing import Optional 2 ----> 3 from ..utils.logging import get_logger 4 from .automatic_speech_recognition import AutomaticSpeechRecognition 5 from .base import TaskTemplate /usr/local/lib/python3.7/dist-packages/datasets/utils/__init__.py in <module>() 19 20 from . import logging ---> 21 from .download_manager import DownloadManager, GenerateMode 22 from .file_utils import DownloadConfig, cached_path, hf_bucket_url, is_remote_url, temp_seed 23 from .mock_download_manager import MockDownloadManager /usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py in <module>() 24 25 from .. 
import config ---> 26 from .file_utils import ( 27 DownloadConfig, 28 cached_path, /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>() 25 import posixpath 26 import requests ---> 27 from tqdm.contrib.concurrent import thread_map 28 29 from .. import __version__, config, utils ModuleNotFoundError: No module named 'tqdm.contrib.concurrent' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.0 - Platform: Colab - Python version: 3.7.11 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2695
2021-07-21T15:52:51
2021-07-22T07:26:25
2021-07-22T07:09:07
{ "login": "bayartsogt-ya", "id": 43239645, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
949,844,722
2,694
fix: ๐Ÿ› change string format to allow copy/paste to work in bash
Before: copy/paste resulted in an error because the square bracket characters `[]` are special characters in bash
closed
https://github.com/huggingface/datasets/pull/2694
2021-07-21T15:30:40
2021-07-22T10:41:47
2021-07-22T10:41:47
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
949,797,014
2,693
Fix OSCAR Esperanto
The Esperanto part (original) of OSCAR has the wrong number of examples: ```python from datasets import load_dataset raw_datasets = load_dataset("oscar", "unshuffled_original_eo") ``` raises ```python NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=314188336, num_examples=121171, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=314064514, num_examples=121168, dataset_name='oscar')}] ``` I updated the number of expected examples in dataset_infos.json cc @sgugger
closed
https://github.com/huggingface/datasets/pull/2693
2021-07-21T14:43:50
2021-07-21T14:53:52
2021-07-21T14:53:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
949,765,484
2,692
Update BibTeX entry
Update BibTeX entry
closed
https://github.com/huggingface/datasets/pull/2692
2021-07-21T14:23:35
2021-07-21T15:31:41
2021-07-21T15:31:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
949,758,379
2,691
xtreme / pan-x cannot be downloaded
## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1 ``` ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2691
2021-07-21T14:18:05
2021-07-26T09:34:22
2021-07-26T09:34:22
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
949,574,500
2,690
Docs details
Some comments here: - the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + a one-liner that installs all the requirements / alternatively a requirements.txt file) - "If youโ€™d like to play with the examples, you must install it from source." in https://huggingface.co/docs/datasets/installation.html: it's not clear to me what this means (what are these "examples"?) - in https://huggingface.co/docs/datasets/loading_datasets.html: "or AWS bucket if itโ€™s not already stored in the library". It's the only place in the doc (aside from the docstring https://huggingface.co/docs/datasets/package_reference/loading_methods.html?highlight=aws bucket#datasets.list_datasets) where the "AWS bucket" is mentioned. It's not easy to understand what this means. Maybe explain more, and link to https://s3.amazonaws.com/datasets.huggingface.co and/or https://huggingface.co/docs/datasets/filesystems.html. - example in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files is obsoleted by https://github.com/huggingface/datasets/pull/2326. Also: see https://github.com/huggingface/datasets/issues/2691 for a bug on this specific dataset. - in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files the doc says "After youโ€™ve downloaded the files, you can point to the folder hosting them locally with the data_dir argument as follows:", but the following example does not show how to use `data_dir` - in https://huggingface.co/docs/datasets/loading_datasets.html#csv-files, it would be nice to have an URL to the csv loader reference (but I'm not sure there is one in the API reference). 
This comment applies in many places in the doc: I would want the API reference to contain doc for all the code/functions/classes... and I would want a lot more links inside the doc pointing to the API entries. - in the API reference (docstrings) I would prefer "SOURCE" to link to github instead of a copy of the code inside the docs site (eg. https://github.com/huggingface/datasets/blob/master/src/datasets/load.py#L711 instead of https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset) - it seems like not all the API is exposed in the doc. For example, there is no doc for [`disable_progress_bar`](https://github.com/huggingface/datasets/search?q=disable_progress_bar), see https://huggingface.co/docs/datasets/search.html?q=disable_progress_bar, even if the code contains docstrings. Does it mean that the function is not officially supported? (otherwise, maybe it also deserves a mention in https://huggingface.co/docs/datasets/package_reference/logging_methods.html) - in https://huggingface.co/docs/datasets/loading_datasets.html?highlight=most%20efficient%20format%20have%20json%20files%20consisting%20multiple%20json%20objects#json-files, "The most efficient format is to have JSON files consisting of multiple JSON objects, one per line, representing individual data rows:", maybe link to https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON and give it a name ("line-delimited JSON"? "JSON Lines" as in https://huggingface.co/docs/datasets/processing.html#exporting-a-dataset-to-csv-json-parquet-or-to-python-objects ?) - in https://huggingface.co/docs/datasets/loading_datasets.html, for the local files sections, it would be nice to provide sample csv / json / text files to download, so that it's easier for the reader to try to load them (instead: they won't try) - the doc explains how to shard a dataset, but does not explain why and when a dataset should be sharded (I have no idea... 
for [parallelizing](https://huggingface.co/docs/datasets/processing.html#multiprocessing)?). Nor does it give an idea of how many shards a dataset typically should have, or why. - the code example in https://huggingface.co/docs/datasets/processing.html#mapping-in-a-distributed-setting does not work, because `training_args` has not been defined earlier in the doc.
closed
https://github.com/huggingface/datasets/pull/2690
2021-07-21T10:43:14
2021-07-27T18:40:54
2021-07-27T18:40:54
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
949,447,104
2,689
cannot save the dataset to disk after rename_column
## Describe the bug If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug In [1]: from datasets import Dataset, load_from_disk In [5]: dataset=Dataset.from_dict({'foo': [0]}) In [7]: dataset.save_to_disk('foo') In [8]: dataset=load_from_disk('foo') In [10]: dataset=dataset.rename_column('foo', 'bar') In [11]: dataset.save_to_disk('foo') --------------------------------------------------------------------------- PermissionError Traceback (most recent call last) <ipython-input-11-a3bc0d4fc339> in <module> ----> 1 dataset.save_to_disk('foo') /mnt/beegfs/projects/meerqat/anaconda3/envs/meerqat/lib/python3.7/site-packages/datasets/arrow_dataset.py in save_to_disk(self, dataset_path , fs) 597 if Path(dataset_path, config.DATASET_ARROW_FILENAME) in cache_files_paths: 598 raise PermissionError( --> 599 f"Tried to overwrite {Path(dataset_path, config.DATASET_ARROW_FILENAME)} but a dataset can't overwrite itself." 600 ) 601 if Path(dataset_path, config.DATASET_INDICES_FILENAME) in cache_files_paths: PermissionError: Tried to overwrite foo/dataset.arrow but a dataset can't overwrite itself. ``` N. B. I created the dataset from dict to enable easy reproduction but the same happens if you load an existing dataset (e.g. starting from `In [8]`) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.7.10 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2689
2021-07-21T08:13:40
2025-02-11T23:23:17
2021-07-21T13:11:04
{ "login": "PaulLerner", "id": 25532159, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
949,182,074
2,688
hebrew language codes he and iw should be treated as aliases
https://huggingface.co/datasets/mc4 is not listed when searching for Hebrew datasets (he) because it uses the older language code iw, which hurts discoverability.
closed
https://github.com/huggingface/datasets/issues/2688
2021-07-20T23:13:52
2021-07-21T16:34:53
2021-07-21T16:34:53
{ "login": "eyaler", "id": 4436747, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
948,890,481
2,687
Minor documentation fix
Currently, the [Writing a dataset loading script](https://huggingface.co/docs/datasets/add_dataset.html) page has a small error. A link to the `matinf` dataset in the [_Dataset scripts of reference_](https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference) section actually leads to `xsquad` instead. This PR fixes that.
closed
https://github.com/huggingface/datasets/pull/2687
2021-07-20T17:43:23
2021-07-21T13:04:55
2021-07-21T13:04:55
{ "login": "slowwavesleep", "id": 44175589, "type": "User" }
[]
true
[]
948,811,669
2,686
Fix bad config ids that name cache directories
`data_dir=None` was considered a dataset config parameter, hence creating a special config_id for every dataset being loaded. Since the config_id is used to name the cache directories, this led to datasets being regenerated for users. I fixed this by ignoring the value of `data_dir` when computing the config_id if it is `None`. I also added a test to make sure the cache directories are not unexpectedly renamed in the future. Fix https://github.com/huggingface/datasets/issues/2683
closed
https://github.com/huggingface/datasets/pull/2686
2021-07-20T16:00:45
2021-07-20T16:27:15
2021-07-20T16:27:15
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
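The idea behind the fix above can be sketched as follows. This is an illustrative toy, not the library's actual hashing code: config kwargs whose value is `None` are dropped before hashing, so a default call keeps the plain `<config_name>` cache directory while explicit overrides still get a distinct suffix.

```python
import hashlib

# Toy version of the config_id logic discussed in #2686/#2683:
# None-valued kwargs must not change the cache directory name.
def config_id(config_name, **kwargs):
    effective = {k: v for k, v in sorted(kwargs.items()) if v is not None}
    if not effective:
        return config_name  # default call -> plain cache dir, e.g. "en"
    suffix = hashlib.sha256(repr(effective).encode()).hexdigest()[:16]
    return f"{config_name}-{suffix}"

print(config_id("en", data_dir=None))        # 'en' — cache dir unchanged
print(config_id("en", data_dir="/data/c4"))  # 'en-<hash>' — distinct dir
```

With the buggy behavior, `data_dir=None` would have been hashed too, producing names like `en-174d3b7155eb68db` and silently invalidating existing caches.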
948,791,572
2,685
Fix Blog Authorship Corpus dataset
This PR: - Updates the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError` - Fixes the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising a `UnicodeDecodeError` for some files Close #2679.
closed
https://github.com/huggingface/datasets/pull/2685
2021-07-20T15:44:50
2021-07-21T13:11:58
2021-07-21T13:11:58
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
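A small sketch of the codec fix above — a hypothetical helper, not the dataset script's actual code: try UTF-8 first and fall back to Latin-1, which can decode any byte sequence.

```python
# Hypothetical helper illustrating the kind of fix applied in #2685:
# decode with UTF-8 when possible, fall back to Latin-1 otherwise.
def read_text(raw: bytes) -> str:
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("latin_1")  # maps every byte to a code point

# b'\xe7' is invalid as a UTF-8 start of this sequence but is 'ç' in
# Latin-1 — the kind of byte that made some corpus files fail to load.
print(read_text(b"fran\xe7ais"))  # 'français'
```

In the actual dataset script the fix was simply to open the known-Latin-1 files with the right codec rather than guessing per file.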
948,771,753
2,684
Print absolute local paths in load_dataset error messages
Use absolute local paths in the error messages of `load_dataset` as per @stas00's suggestion in https://github.com/huggingface/datasets/pull/2500#issuecomment-874891223
closed
https://github.com/huggingface/datasets/pull/2684
2021-07-20T15:28:28
2021-07-22T20:48:19
2021-07-22T14:01:10
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
948,721,379
2,683
Cache directories changed due to recent changes in how config kwargs are handled
Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example: ```python from datasets import load_dataset_builder c4_builder = load_dataset_builder("c4", "en") print(c4_builder.cache_dir) # /Users/quentinlhoest/.cache/huggingface/datasets/c4/en-174d3b7155eb68db/0.0.0/... # instead of # /Users/quentinlhoest/.cache/huggingface/datasets/c4/en/0.0.0/... ``` This issue could be annoying since it would simply ignore old cache directories for users, and regenerate datasets cc @stas00 this is what you experienced a few days ago
closed
https://github.com/huggingface/datasets/issues/2683
2021-07-20T14:37:57
2021-07-20T16:27:15
2021-07-20T16:27:15
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
948,713,137
2,682
Fix c4 expected files
Some files were not registered in the list of expected files to download Fix https://github.com/huggingface/datasets/issues/2677
closed
https://github.com/huggingface/datasets/pull/2682
2021-07-20T14:29:31
2021-07-20T14:38:11
2021-07-20T14:38:10
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
948,708,645
2,681
5 duplicate datasets
## Describe the bug In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are: - https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch <img width="838" alt="Screenshot 2021-07-20 at 16 33 58" src="https://user-images.githubusercontent.com/1676121/126342757-4625522a-f788-41a3-bd1f-2a8b9817bbf5.png"> - https://paperswithcode.com/dataset/squad -> https://huggingface.co/datasets/squad and https://huggingface.co/datasets/squad_v2 - https://paperswithcode.com/dataset/narrativeqa -> https://huggingface.co/datasets/narrativeqa and https://huggingface.co/datasets/narrativeqa_manual - https://paperswithcode.com/dataset/hate-speech-and-offensive-language -> https://huggingface.co/datasets/hate_offensive and https://huggingface.co/datasets/hate_speech_offensive - https://paperswithcode.com/dataset/newsph-nli -> https://huggingface.co/datasets/newsph and https://huggingface.co/datasets/newsph_nli Possible solutions: - don't fix (it works) - for each pair of duplicate datasets, remove one, and create an alias to the other. ## Steps to reproduce the bug Visit the Paperswithcode links, and look at the "Dataset Loaders" section ## Expected results There should only be one reference to a Hugging Face dataset loader ## Actual results Two Hugging Face dataset loaders
closed
https://github.com/huggingface/datasets/issues/2681
2021-07-20T14:25:00
2021-07-20T15:44:17
2021-07-20T15:44:17
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
948,649,716
2,680
feat: ๐ŸŽธ add paperswithcode id for qasper dataset
The reverse reference exists on paperswithcode: https://paperswithcode.com/dataset/qasper
closed
https://github.com/huggingface/datasets/pull/2680
2021-07-20T13:22:29
2021-07-20T14:04:10
2021-07-20T14:04:10
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
948,506,638
2,679
Cannot load the blog_authorship_corpus due to codec errors
## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the dataset without errors. ## Actual results An error similar to the one below was raised for (what seems like) every XML file. /home/izaskr/.cache/huggingface/datasets/downloads/extracted/7cf52524f6517e168604b41c6719292e8f97abbe8f731e638b13423f4212359a/blogs/788358.male.24.Arts.Libra.xml cannot be loaded. Error message: 'utf-8' codec can't decode byte 0xe7 in position 7551: invalid continuation byte Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/load.py", line 856, in load_dataset builder_instance.download_and_prepare( File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 583, in download_and_prepare self._download_and_prepare( File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 671, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}] ## Environment info <!-- You can run the command 
`datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2679
2021-07-20T10:13:20
2021-07-21T17:02:21
2021-07-21T13:11:58
{ "login": "izaskr", "id": 38069449, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
948,471,222
2,678
Import Error in Kaggle notebook
## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (most recent call last) <ipython-input-9-652e886d387f> in <module> ----> 1 import datasets /opt/conda/lib/python3.7/site-packages/datasets/__init__.py in <module> 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in <module> 36 import pandas as pd 37 import pyarrow as pa ---> 38 import pyarrow.compute as pc 39 from multiprocess import Pool, RLock 40 from tqdm.auto import tqdm /opt/conda/lib/python3.7/site-packages/pyarrow/compute.py in <module> 16 # under the License. 17 ---> 18 from pyarrow._compute import ( # noqa 19 Function, 20 FunctionOptions, ImportError: /opt/conda/lib/python3.7/site-packages/pyarrow/_compute.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK5arrow7compute15KernelSignature8ToStringEv ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Kaggle - Python version: 3.7.10 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2678
2021-07-20T09:28:38
2021-07-21T13:59:26
2021-07-21T13:03:02
{ "login": "prikmm", "id": 47216475, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
948,429,788
2,677
Error when downloading C4
Hi, I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive: `datasets.load_dataset('c4', 'en')` Is this a bug, or do I have some configuration missing on my server? Thanks! <img width="1014" alt="Screenshot 2021-07-20 at 11 37 17" src="https://user-images.githubusercontent.com/36672861/126289448-6e0db402-5f3f-485a-bf74-eb6e0271fc25.png">
closed
https://github.com/huggingface/datasets/issues/2677
2021-07-20T08:37:30
2021-07-20T14:41:31
2021-07-20T14:38:10
{ "login": "Aktsvigun", "id": 36672861, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
947,734,909
2,676
Increase json reader block_size automatically
Currently some files can't be read with the default parameters of the JSON lines reader. For example this one: https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz raises a pyarrow error: ```python ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` The block size that is used is the default one by pyarrow (related to this [jira issue](https://issues.apache.org/jira/browse/ARROW-9612)). To fix this issue I changed the block_size to increase automatically if there is a straddling issue when parsing a batch of json lines. By default the value is `chunksize // 32` in order to leverage multithreading, and it doubles every time a straddling issue occurs. The block_size is then reset for each file. cc @thomwolf @albertvillanova
closed
https://github.com/huggingface/datasets/pull/2676
2021-07-19T14:51:14
2021-07-19T17:51:39
2021-07-19T17:51:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
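The retry-with-doubling strategy described in the PR above can be sketched generically. This is a toy model, not the actual `datasets` JSON reader: `parse` below stands in for pyarrow's reader, which raises a "straddling object" error when a JSON object crosses a block boundary.

```python
# Generic sketch of the #2676 strategy: start with a modest block size
# and double it whenever the parser reports a straddling object.
class StraddlingError(Exception):
    pass

def parse(lines, block_size):
    # Stand-in for pyarrow's JSON reader: pretend it fails whenever a
    # single JSON line is longer than the block size.
    if any(len(line) > block_size for line in lines):
        raise StraddlingError("straddling object straddles two block boundaries")
    return len(lines)  # number of parsed rows

def read_json(lines, block_size=32):
    while True:
        try:
            return parse(lines, block_size), block_size
        except StraddlingError:
            block_size *= 2  # retry with a bigger block

rows, final_block_size = read_json(["x" * 100, "y" * 10], block_size=32)
print(rows, final_block_size)  # 2 128
```

In the real implementation the starting value is `chunksize // 32` (to keep multithreading effective) and the block size is reset for each new file.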
947,657,732
2,675
Parallelize ETag requests
Since https://github.com/huggingface/datasets/pull/2628 we use the ETag of the remote data files to compute the directory in the cache where a dataset is saved. This is useful in order to reload the dataset from the cache only if the remote files haven't changed. In this PR I made the ETag requests parallel using multithreading. There is also a tqdm progress bar that shows up if there are more than 16 data files.
closed
https://github.com/huggingface/datasets/pull/2675
2021-07-19T13:30:42
2021-07-19T19:33:25
2021-07-19T19:33:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
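The multithreading pattern from the PR above can be sketched with a thread pool. Here `fetch_etag` is a dummy stand-in for the real HTTP HEAD request, so the sketch runs without network access:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the #2675 pattern: fetch ETags for many data files
# concurrently. `fetch_etag` is a stand-in for an HTTP HEAD request.
def fetch_etag(url: str) -> str:
    return f"etag-of-{url}"  # a real implementation would hit the network

def get_etags(urls, max_workers=16):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Executor.map preserves input order, so results line up with urls.
        return list(pool.map(fetch_etag, urls))

urls = [f"https://example.com/file-{i}.json.gz" for i in range(4)]
print(get_etags(urls))
```

ETag fetching is I/O-bound, so threads (rather than processes) are the natural fit; the order-preserving `map` matters because each ETag must stay paired with its data file when computing the cache directory hash.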
947,338,202
2,674
Fix sacrebleu parameter name
DONE: - Fix parameter name: `smooth` to `smooth_method`. - Improve kwargs description. - Align docs on using a metric. - Add example of passing additional arguments in using metrics. Related to #2669.
closed
https://github.com/huggingface/datasets/pull/2674
2021-07-19T07:07:26
2021-07-19T08:07:03
2021-07-19T08:07:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
947,300,008
2,673
Fix potential DuplicatedKeysError in SQuAD
DONE: - Fix potential DuplicatedKeysError by ensuring keys are unique. - Align examples in the docs with the SQuAD code. We should promote as a good practice that keys should be programmatically generated as unique, instead of read from data (which might not be unique).
closed
https://github.com/huggingface/datasets/pull/2673
2021-07-19T06:08:00
2021-07-19T07:08:03
2021-07-19T07:08:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
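The practice promoted in the PR above can be illustrated with a minimal `_generate_examples`-style generator (a sketch, not the SQuAD script itself): derive the key from a running counter instead of a field in the data, which may contain duplicates.

```python
# Illustration of the practice from #2673: keys come from enumerate(),
# so they are unique by construction even if row ids collide, which
# would otherwise trigger a DuplicatedKeysError at generation time.
def generate_examples(rows):
    for key, row in enumerate(rows):
        yield key, row

rows = [{"id": "q1"}, {"id": "q1"}, {"id": "q2"}]  # duplicate "id" values
keys = [key for key, _ in generate_examples(rows)]
print(keys)  # [0, 1, 2]
assert len(keys) == len(set(keys))  # unique despite colliding row ids
```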
947,294,605
2,672
Fix potential DuplicatedKeysError in LibriSpeech
DONE: - Fix unnecessary path join. - Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote as a good practice that keys should be programmatically generated as unique, instead of read from data (which might not be unique).
closed
https://github.com/huggingface/datasets/pull/2672
2021-07-19T06:00:49
2021-07-19T06:28:57
2021-07-19T06:28:56
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
947,273,875
2,671
Mesinesp development and training data sets have been added.
Mesinesp (https://zenodo.org/search?page=1&size=20&q=mesinesp) contains semantically indexed medical records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent to MeSH terms. The Mesinesp (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) development set has a total of 750 records, and the training set has a total of 369,368 records.
closed
https://github.com/huggingface/datasets/pull/2671
2021-07-19T05:14:38
2021-07-19T07:32:28
2021-07-19T06:45:50
{ "login": "aslihanuysall", "id": 32900185, "type": "User" }
[]
true
[]
947,120,709
2,670
Using sharding to parallelize indexing
**Is your feature request related to a problem? Please describe.** Creating an Elasticsearch index on a large dataset can take quite a long time and cannot be parallelized across shards (the index creations collide). **Describe the solution you'd like** When working on dataset shards, if an index already exists, its mapping should be checked and, if compatible, the indexing process should continue with the shard data. Additionally, at the end of the process, the `_indexes` dict should be sent back to the original dataset object (from which the shards were created) to allow using the index for later filtering on the whole dataset. **Describe alternatives you've considered** Each dataset shard could create independent partial indices. Then, at the whole-dataset level, all indices would be referenced in the `_indexes` dict and used in querying through `get_nearest_examples()`. The drawback is that the scores would be computed independently on the partial indices, leading to inconsistent values for most scoring based on corpus-level statistics (tf/idf, BM25). **Additional context** The objective is to parallelize the index creation to speed up the process (i.e., putting a heavy load on the ES server, which it can handle) while still enabling search on the whole dataset afterwards.
open
https://github.com/huggingface/datasets/issues/2670
2021-07-18T21:26:26
2021-10-07T13:33:25
null
{ "login": "ggdupont", "id": 5583410, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
946,982,998
2,669
Metric kwargs are not passed to underlying external metric f1_score
## Describe the bug When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so. ## Steps to reproduce the bug ```python import datasets f1 = datasets.load_metric("f1", keep_in_memory=True, average="min") f1.add_batch(predictions=[0,2,3], references=[1, 2, 3]) f1.compute() ``` ## Expected results No error, because `average="min"` should be passed correctly to f1_score in sklearn. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\datasets\metric.py", line 402, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "C:\Users\bramv\.cache\huggingface\modules\datasets_modules\metrics\f1\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\f1.py", line 97, in _compute "f1": f1_score( File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f return f(*args, **kwargs) File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1071, in f1_score return fbeta_score(y_true, y_pred, beta=1, labels=labels, File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f return f(*args, **kwargs) File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1195, in fbeta_score _, _, f, _ = precision_recall_fscore_support(y_true, y_pred, File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f return f(*args, **kwargs) File 
"C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1464, in precision_recall_fscore_support labels = _check_set_wise_labels(y_true, y_pred, average, labels, File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1294, in _check_set_wise_labels raise ValueError("Target is %s but average='binary'. Please " ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted']. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.2 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2669
2021-07-18T08:32:31
2021-07-18T18:36:05
2021-07-18T11:19:04
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
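Why the call in the issue above fails can be shown with a toy metric class — a sketch of the behavior, not the library's actual `Metric` code, and `f1_score` below is a dummy stand-in for sklearn's function: keyword arguments given at metric-construction time configure the metric object and are never forwarded to the underlying scorer, so extra arguments like `average` belong to `compute()` instead (this is also what the docs alignment in #2674 clarifies).

```python
# Toy model of the behavior reported in #2669: construction kwargs are
# stored on the metric object but never reach the underlying scorer.
def f1_score(predictions, references, average="binary"):
    return {"average_used": average}  # dummy: just echo the setting

class F1Metric:
    def __init__(self, **config):   # load_metric(...) kwargs land here...
        self.config = config        # ...and are never forwarded
    def compute(self, predictions, references, **kwargs):
        return f1_score(predictions, references, **kwargs)

metric = F1Metric(average="min")              # silently ignored
print(metric.compute([0, 2, 3], [1, 2, 3]))   # {'average_used': 'binary'}
print(metric.compute([0, 2, 3], [1, 2, 3], average="macro"))
# {'average_used': 'macro'} — pass scorer kwargs to compute(), not load_metric()
```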
946,867,622
2,668
Add Russian SuperGLUE
Hi, This adds the [Russian SuperGLUE](https://russiansuperglue.com/) dataset. For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for.
closed
https://github.com/huggingface/datasets/pull/2668
2021-07-17T17:41:28
2021-07-29T11:50:31
2021-07-29T11:50:31
{ "login": "slowwavesleep", "id": 44175589, "type": "User" }
[]
true
[]
946,861,908
2,667
Use tqdm from tqdm_utils
This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`. Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, there is no easy way to disable progress bars in a multiprocess setting on Windows (patching logging with `datasets.utils.logging.get_verbosity = lambda: datasets.utils.logging.NOTSET` doesn't seem to work as well), so adding support for this is a future goal. Additionally, this PR adds a unit ("ba" for batches) to the bar printed by `Dataset.to_json` (this change is motivated by https://github.com/huggingface/datasets/issues/2657).
closed
https://github.com/huggingface/datasets/pull/2667
2021-07-17T17:06:35
2021-07-19T17:39:10
2021-07-19T17:32:00
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
946,825,140
2,666
Adds CodeClippy dataset [WIP]
CodeClippy is an open-source code dataset scraped from GitHub during the flax-jax-community-week https://the-eye.eu/public/AI/training_data/code_clippy_data/
closed
https://github.com/huggingface/datasets/pull/2666
2021-07-17T13:32:04
2023-07-26T23:06:01
2022-10-03T09:37:35
{ "login": "arampacha", "id": 69807323, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
946,822,036
2,665
Adds APPS dataset to the hub [WIP]
A loading script for the [APPS dataset](https://github.com/hendrycks/apps)
closed
https://github.com/huggingface/datasets/pull/2665
2021-07-17T13:13:17
2022-10-03T09:38:10
2022-10-03T09:38:10
{ "login": "arampacha", "id": 69807323, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
946,552,273
2,663
[`to_json`] add multi-proc sharding support
As discussed on Slack, it appears that `to_json` is quite slow on huge datasets like OSCAR.

I implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-proc support could be implemented internally in `to_json` via a `num_proc` argument. I guess `num_proc` will be the number of shards?

I think the user will need to use this feature wisely, since too many processes writing to, say, a normal-style HD is likely to be slower than one process. I'm not sure whether the user should be responsible for concatenating the shards at the end or `datasets`; either way works for my needs.

The code I was using:

```
from multiprocessing import cpu_count, Process, Queue

[...]

filtered_dataset = concat_dataset.map(filter_short_documents, batched=True, batch_size=256, num_proc=cpu_count())

DATASET_NAME = "oscar"
SHARDS = 10

def process_shard(idx):
    print(f"Sharding {idx}")
    ds_shard = filtered_dataset.shard(SHARDS, idx, contiguous=True)
    # ds_shard = ds_shard.shuffle()  # remove contiguous=True above if shuffling
    print(f"Saving {DATASET_NAME}-{idx}.jsonl")
    ds_shard.to_json(f"{DATASET_NAME}-{idx}.jsonl", orient="records", lines=True, force_ascii=False)

queue = Queue()
processes = [Process(target=process_shard, args=(idx,)) for idx in range(SHARDS)]
for p in processes:
    p.start()
for p in processes:
    p.join()
```

Thank you!

@lhoestq
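For reference, a self-contained sketch of the shard-then-concatenate pattern described above (the data and file names are illustrative stand-ins for the real `ds_shard.to_json` calls):

```python
from multiprocessing import Process

SHARDS = 4
DATASET_NAME = "demo"  # illustrative, not OSCAR

def write_shard(idx):
    # stand-in for ds_shard.to_json(...): one JSON record per line
    with open(f"{DATASET_NAME}-{idx}.jsonl", "w") as f:
        f.write(f'{{"shard": {idx}}}\n')

def main():
    # write all shards in parallel, one process per shard
    processes = [Process(target=write_shard, args=(i,)) for i in range(SHARDS)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    # one way to merge afterwards: concatenate the shard files in shard order
    with open(f"{DATASET_NAME}.jsonl", "w") as out:
        for i in range(SHARDS):
            with open(f"{DATASET_NAME}-{i}.jsonl") as f:
                out.write(f.read())

if __name__ == "__main__":
    main()
```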
closed
https://github.com/huggingface/datasets/issues/2663
2021-07-16T19:41:50
2021-09-13T13:56:37
2021-09-13T13:56:37
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]