Dataset schema (from the dataset viewer; min/max values shown per column):

- id: int64 (599M to 3.48B)
- number: int64 (1 to 7.8k)
- title: string (length 1 to 290)
- state: string (2 classes)
- comments: list (length 0 to 30)
- created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-10-05 06:37:50)
- updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-10-05 10:32:43)
- closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-10-01 13:56:03)
- body: string (length 0 to 228k)
- user: string (length 3 to 26)
- html_url: string (length 46 to 51)
- pull_request: dict
- is_pull_request: bool (2 classes)
1,226,806,652
4,287
"NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
closed
[ "So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L305, triggered from https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src...
2022-05-05T15:09:45
2022-05-10T13:53:19
2022-05-10T13:53:19
## Describe the bug

When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception. All that assuming that `datasets` is properly installed and `faiss-gpu` too, as well as all the CUDA drivers required.

## Steps to reproduce the bug

```python
# Sample code to reproduce the bug
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
import torch

torch.set_grad_enabled(False)
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

from datasets import load_dataset

ds = load_dataset('crime_and_punish', split='train[:100]')
ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["line"], return_tensors="pt"))[0][0].numpy()})
ds_with_embeddings.add_faiss_index(column='embeddings', device=0)  # default `device=None`
```

## Expected results

A new column named `embeddings` in the dataset that we're adding the index to.

## Actual results

An exception is triggered with the following message `NameError: name 'faiss' is not defined`.

## Environment info

- `datasets` version: 2.1.0
- Platform: Linux-5.13.0-1022-azure-x86_64-with-glibc2.31
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
alvarobartt
https://github.com/huggingface/datasets/issues/4287
null
false
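The reporter's fix (adding a missing `import faiss` inside the `@staticmethod` in `search.py`) reflects a general pattern: a name referenced inside a function raises `NameError` when its import only happens in some other scope or code path. A hedged, faiss-free sketch of the lazy-import pattern that avoids this class of bug (the function names here are illustrative, not the actual `datasets` API):

```python
import importlib


def resolve_optional_dependency(name):
    """Import an optional dependency lazily, with a clear error if absent.

    Importing inside the function that uses the module (rather than relying
    on a guarded module-level import) prevents the kind of
    `NameError: name 'faiss' is not defined` seen in the issue above.
    """
    try:
        return importlib.import_module(name)
    except ImportError as err:
        raise ImportError(
            f"{name!r} is required for this feature; please install it first"
        ) from err


def add_index_on_device(device=None):
    # Illustrative stand-in for the static method in datasets/search.py:
    # the GPU-only branch must resolve faiss in its own scope before use.
    if device is None:
        return "cpu index"
    backend = resolve_optional_dependency("faiss")  # clear error if faiss is missing
    return f"gpu index via {backend.__name__}"
```

The point of the pattern is that a missing dependency fails loudly with an `ImportError` naming the package, instead of a confusing `NameError` deep inside library code.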
1,226,758,621
4,286
Add Lahnda language tag
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-05T14:34:20
2022-05-10T12:10:04
2022-05-10T12:02:38
This language is present in [Wikimedia's WIT](https://huggingface.co/datasets/wikimedia/wit_base) dataset.
mariosasko
https://github.com/huggingface/datasets/pull/4286
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4286", "html_url": "https://github.com/huggingface/datasets/pull/4286", "diff_url": "https://github.com/huggingface/datasets/pull/4286.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4286.patch", "merged_at": "2022-05-10T12:02:37" }
true
1,226,374,831
4,285
Update LexGLUE README.md
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-05T08:36:50
2022-05-05T13:39:04
2022-05-05T13:33:35
Update the leaderboard based on the latest results presented in the ACL 2022 version of the article.
iliaschalkidis
https://github.com/huggingface/datasets/pull/4285
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4285", "html_url": "https://github.com/huggingface/datasets/pull/4285", "diff_url": "https://github.com/huggingface/datasets/pull/4285.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4285.patch", "merged_at": "2022-05-05T13:33:35" }
true
1,226,200,727
4,284
Issues in processing very large datasets
closed
[ "Hi ! `datasets` doesn't load the dataset in memory. Instead it uses memory mapping to load your dataset from your disk (it is stored as arrow files). Do you know at what point you have RAM issues exactly ?\r\n\r\nHow big are your graph_data_train dictionaries btw ?", "Closing this issue due to inactivity." ]
2022-05-05T05:01:09
2023-07-25T15:12:38
2023-07-25T15:12:38
## Describe the bug

I'm trying to add a feature called "subgraph" to the CNN/DM dataset (modifications on the `run_summarization.py` script of Hugging Face Transformers) --- I'm not quite sure if I'm doing it the right way, though --- but the main problem appears when the training starts, where the error `OSError: [Errno 12] Cannot allocate memory` appears. I suppose this problem roots in RAM issues and how the dataset is loaded during training, but I have no clue of what I can do to fix it. Observing the dataset's cache directory, I see that it takes ~600GB and that's why I believe special care is needed when loading it into memory.

Here are my modifications to the `run_summarization.py` code:

```python
# loading pre-computed dictionary where keys are 'id' of article and values are corresponding subgraph
graph_data_train = get_graph_data('train')
graph_data_validation = get_graph_data('val')
...
with training_args.main_process_first(desc="train dataset map pre-processing"):
    train_dataset = train_dataset.map(
        preprocess_function_train,
        batched=True,
        num_proc=data_args.preprocessing_num_workers,
        remove_columns=column_names,
        load_from_cache_file=not data_args.overwrite_cache,
        desc="Running tokenizer on train dataset",
    )
```

And here is the modified preprocess function:

```python
def preprocess_function_train(examples):
    inputs, targets, sub_graphs, ids = [], [], [], []
    for i in range(len(examples[text_column])):
        if examples[text_column][i] is not None and examples[summary_column][i] is not None:
            # if examples['doc_id'][i] in graph_data.keys():
            inputs.append(examples[text_column][i])
            targets.append(examples[summary_column][i])
            sub_graphs.append(graph_data_train[examples['id'][i]])
            ids.append(examples['id'][i])
    inputs = [prefix + inp for inp in inputs]
    model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding,
                             truncation=True, sub_graphs=sub_graphs, ids=ids)
    # Setup the tokenizer for targets
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, max_length=max_target_length, padding=padding, truncation=True)
    # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100
    # when we want to ignore padding in the loss.
    if padding == "max_length" and data_args.ignore_pad_token_for_loss:
        labels["input_ids"] = [
            [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
        ]
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.0
- Platform: Linux Ubuntu
- Python version: 3.6
- PyArrow version: 6.0.1
sajastu
https://github.com/huggingface/datasets/issues/4284
null
false
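One way to tame RAM usage in a setup like this (a hedged suggestion, not part of the original script): keep the precomputed `graph_data_train` dictionary on disk in a key-value store and look subgraphs up by article id inside the preprocessing function, instead of holding the whole structure in memory. A minimal sketch with the standard library's `shelve`:

```python
import os
import shelve
import tempfile


def build_graph_store(path, graph_data):
    """Write a {article_id: subgraph} mapping to a disk-backed store."""
    with shelve.open(path) as db:
        for article_id, subgraph in graph_data.items():
            db[article_id] = subgraph


def get_subgraph(path, article_id):
    """Fetch one subgraph by id without loading the whole mapping into RAM."""
    with shelve.open(path, flag="r") as db:
        return db[article_id]


# Toy demonstration (real values would be the precomputed subgraph structures):
store = os.path.join(tempfile.mkdtemp(), "graphs_train")
build_graph_store(store, {"0001": [("n1", "n2")], "0002": [("n2", "n3")]})
```

In a real `map` pipeline you would open the shelf once per worker rather than once per lookup; the sketch keeps the open/close inside `get_subgraph` only for brevity.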
1,225,686,988
4,283
Fix filesystem docstring
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-04T17:42:42
2022-05-06T16:32:02
2022-05-06T06:22:17
This PR untangles the `S3FileSystem` docstring so the [parameters](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#parameters) are properly displayed.
stevhliu
https://github.com/huggingface/datasets/pull/4283
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4283", "html_url": "https://github.com/huggingface/datasets/pull/4283", "diff_url": "https://github.com/huggingface/datasets/pull/4283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4283.patch", "merged_at": "2022-05-06T06:22:17" }
true
1,225,616,545
4,282
Don't do unnecessary list type casting to avoid replacing None values by empty lists
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Quick question about the message in the warning. You say \"will be fixed in a future major version\" but don't you mean \"will raise an error in a future major version\"?", "Right ! Good catch, thanks, I updated the message to say ...
2022-05-04T16:37:01
2022-05-06T10:43:58
2022-05-06T10:37:00
In certain cases, `None` values are replaced by empty lists when casting feature types. It happens every time you cast an array of nested lists like `[None, [0, 1, 2, 3]]` to a different type (to change the integer precision for example). In this case you'd get `[[], [0, 1, 2, 3]]`. This issue comes from PyArrow, see the discussion in https://github.com/huggingface/datasets/issues/3676

This issue also happens when no type casting is needed, because casting is supposed to be a no-op in this case. But as https://github.com/huggingface/datasets/issues/3676 shows, it's not the case, and `None` values are replaced by empty lists even if we cast to the exact same type.

In this PR I just work around this bug in the case where no type casting is needed. In particular, I call `pa.ListArray.from_arrays` only when necessary. I also added a warning when some `None` values are effectively replaced by empty lists. I wanted to raise an error in this case, but maybe we should wait for a major update to do so.

This PR fixes this particular case, which occurs in `run_qa.py` in `transformers`:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": range(4)})
ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"])
print(ds.to_pandas())
# before:
#              b
# 0  [None, [0]]
# 1    [[], [0]]
# 2    [[], [0]]
# 3    [[], [0]]
#
# now:
#              b
# 0  [None, [0]]
# 1  [None, [0]]
# 2  [None, [0]]
# 3  [None, [0]]
```

cc @sgugger
lhoestq
https://github.com/huggingface/datasets/pull/4282
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4282", "html_url": "https://github.com/huggingface/datasets/pull/4282", "diff_url": "https://github.com/huggingface/datasets/pull/4282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4282.patch", "merged_at": "2022-05-06T10:37:00" }
true
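The underlying mechanism, as a minimal pure-Python model (not the actual PyArrow code): Arrow stores a list array as a flat values buffer plus an offsets array, with nulls tracked in a separate validity bitmap. A null row and an empty row have identical offsets, so any path that rebuilds the array from offsets and values alone, as a bare `pa.ListArray.from_arrays` call does, turns `None` into `[]`:

```python
def encode_list_array(rows):
    """Model Arrow's ListArray layout: flat values + offsets + validity bitmap."""
    values, offsets, validity = [], [0], []
    for row in rows:
        if row is None:
            validity.append(False)
            offsets.append(offsets[-1])  # a null row spans zero values, just like []
        else:
            validity.append(True)
            values.extend(row)
            offsets.append(offsets[-1] + len(row))
    return values, offsets, validity


def decode_list_array(values, offsets, validity=None):
    """Rebuild rows from the layout. Without the validity bitmap, a null row
    is indistinguishable from an empty one -- the bug described above."""
    rows = []
    for i in range(len(offsets) - 1):
        if validity is not None and not validity[i]:
            rows.append(None)
        else:
            rows.append(values[offsets[i]:offsets[i + 1]])
    return rows
```

Encoding `[None, [0, 1, 2, 3]]` and decoding without the validity bitmap reproduces the `[[], [0, 1, 2, 3]]` corruption; decoding with the bitmap round-trips correctly.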
1,225,556,939
4,281
Remove a copy-paste sentence in dataset cards
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "The non-passing tests have nothing to do with this PR." ]
2022-05-04T15:41:55
2022-05-06T08:38:03
2022-05-04T18:33:16
Remove the following copy-paste sentence from dataset cards: ``` We show detailed information for up to 5 configurations of the dataset. ```
albertvillanova
https://github.com/huggingface/datasets/pull/4281
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4281", "html_url": "https://github.com/huggingface/datasets/pull/4281", "diff_url": "https://github.com/huggingface/datasets/pull/4281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4281.patch", "merged_at": "2022-05-04T18:33:16" }
true
1,225,446,844
4,280
Add missing features to commonsense_qa dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova it adds question_concept and id which is great. I suppose we'll talk about staying true to the format on another PR. ", "Yes, let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may addre...
2022-05-04T14:24:26
2022-05-06T14:23:57
2022-05-06T14:16:46
Fix partially #4275.
albertvillanova
https://github.com/huggingface/datasets/pull/4280
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4280", "html_url": "https://github.com/huggingface/datasets/pull/4280", "diff_url": "https://github.com/huggingface/datasets/pull/4280.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4280.patch", "merged_at": "2022-05-06T14:16:46" }
true
1,225,300,273
4,279
Update minimal PyArrow version warning
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-04T12:26:09
2022-05-05T08:50:58
2022-05-05T08:43:47
Update the minimal PyArrow version warning (should've been part of #4250).
mariosasko
https://github.com/huggingface/datasets/pull/4279
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4279", "html_url": "https://github.com/huggingface/datasets/pull/4279", "diff_url": "https://github.com/huggingface/datasets/pull/4279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4279.patch", "merged_at": "2022-05-05T08:43:47" }
true
1,225,122,123
4,278
Add missing features to openbookqa dataset for additional config
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the data feature structure." ]
2022-05-04T09:22:50
2022-05-06T13:13:20
2022-05-06T13:06:01
Fix partially #4276.
albertvillanova
https://github.com/huggingface/datasets/pull/4278
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4278", "html_url": "https://github.com/huggingface/datasets/pull/4278", "diff_url": "https://github.com/huggingface/datasets/pull/4278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4278.patch", "merged_at": "2022-05-06T13:06:01" }
true
1,225,002,286
4,277
Enable label alignment for token classification datasets
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hmm, not sure why the Windows tests are failing with:\r\n\r\n```\r\nDid not find path entry C:\\tools\\miniconda3\\bin\r\nC:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n```\r\n\r\nEdit: running the CI again ...
2022-05-04T07:15:16
2022-05-06T15:42:15
2022-05-06T15:36:31
This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. NER).

Example of usage:

```python
from datasets import load_dataset

ner_ds = load_dataset("conll2003", split="train")
# returns [3, 0, 7, 0, 0, 0, 7, 0, 0]
ner_ds[0]["ner_tags"]

# hypothetical model mapping with O <--> B-LOC
label2id = {
    "B-LOC": "0",
    "B-MISC": "7",
    "B-ORG": "3",
    "B-PER": "1",
    "I-LOC": "6",
    "I-MISC": "8",
    "I-ORG": "4",
    "I-PER": "2",
    "O": "5",
}
ner_aligned_ds = ner_ds.align_labels_with_mapping(label2id, "ner_tags")
# returns [3, 5, 7, 5, 5, 5, 7, 5, 5]
ner_aligned_ds[0]["ner_tags"]
```

Context: we need this in AutoTrain to automatically align datasets / models during evaluation.

cc @abhishekkrthakur
lewtun
https://github.com/huggingface/datasets/pull/4277
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4277", "html_url": "https://github.com/huggingface/datasets/pull/4277", "diff_url": "https://github.com/huggingface/datasets/pull/4277.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4277.patch", "merged_at": "2022-05-06T15:36:31" }
true
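Under the hood, the alignment amounts to remapping each integer tag through the dataset's label names into the model's `label2id`. A hedged sketch of that step (the `DATASET_LABELS` order below is the conll2003 `ner_tags` order implied by the example, stated here as an assumption rather than taken from this PR):

```python
# Assumed conll2003 ner_tags order (id -> label name), inferred from the example:
DATASET_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]


def align_tags(tags, dataset_labels, model_label2id):
    """Remap integer tags from the dataset's label order to the model's ids.

    model_label2id maps label names to ids (possibly str-valued, as in the
    hypothetical mapping above), so values are coerced back to int.
    """
    return [int(model_label2id[dataset_labels[tag]]) for tag in tags]


model_label2id = {"B-LOC": "0", "B-MISC": "7", "B-ORG": "3", "B-PER": "1",
                  "I-LOC": "6", "I-MISC": "8", "I-ORG": "4", "I-PER": "2", "O": "5"}
```

Applied to the example's first row `[3, 0, 7, 0, 0, 0, 7, 0, 0]`, this reproduces the `[3, 5, 7, 5, 5, 5, 7, 5, 5]` output shown above.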
1,224,949,252
4,276
OpenBookQA has missing and inconsistent field names
closed
[ "Thanks for reporting, @vblagoje.\r\n\r\nIndeed, I noticed some of these issues while reviewing this PR:\r\n- #4259 \r\n\r\nThis is in my TODO list. ", "Ok, awesome @albertvillanova How about #4275 ?", "On the other hand, I am not sure if we should always preserve the original nested structure. I think we shoul...
2022-05-04T05:51:52
2022-10-11T17:11:53
2022-10-05T13:50:03
## Describe the bug

The OpenBookQA implementation is inconsistent with the original dataset. We need to:

1. The dataset field `[question][stem]` is flattened into `question_stem`. Unflatten it to match the original format.
2. Add missing additional fields:
   - 'fact1': row['fact1'],
   - 'humanScore': row['humanScore'],
   - 'clarity': row['clarity'],
   - 'turkIdAnonymized': row['turkIdAnonymized']
3. Ensure the structure and every data item in the original OpenBookQA matches our OpenBookQA version.

## Expected results

The structure and every data item in the original OpenBookQA matches our OpenBookQA version.

## Actual results

TBD

## Environment info

- `datasets` version: 2.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
vblagoje
https://github.com/huggingface/datasets/issues/4276
null
false
1,224,943,414
4,275
CommonSenseQA has missing and inconsistent field names
open
[ "Thanks for reporting, @vblagoje.\r\n\r\nI'm opening a PR to address this. " ]
2022-05-04T05:38:59
2022-05-04T11:41:18
null
## Describe the bug

In short, the CommonSenseQA implementation is inconsistent with the original dataset. More precisely, we need to:

1. Add the dataset's matching "id" field. The current dataset, instead, regenerates a monotonically increasing id.
2. The ["question"]["stem"] field is flattened into "question". We should match the original dataset and unflatten it.
3. Add the missing "question_concept" field in the question tree node.
4. Anything else? Go over the data structure of the newly repaired CommonSenseQA and make sure it matches the original.

## Expected results

Every data item of the CommonSenseQA should structurally and data-wise match the original CommonSenseQA dataset.

## Actual results

TBD

## Environment info

- `datasets` version: 2.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
vblagoje
https://github.com/huggingface/datasets/issues/4275
null
false
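The repair described in points 1-3 could look roughly like the per-example transform below. This is a hypothetical sketch: the nested field names follow the original CommonsenseQA JSON layout, and `original_id` / `question_concept` stand for values recovered from the source data, not fields the current loader exposes.

```python
def restore_commonsense_qa_example(example, original_id, question_concept):
    """Rebuild the original nested CommonsenseQA structure from the
    flattened datasets version (hypothetical sketch)."""
    return {
        "id": original_id,  # point 1: keep the source dataset's own id
        "question": {
            "stem": example["question"],            # point 2: unflatten back under question.stem
            "question_concept": question_concept,   # point 3: re-attach the missing field
            "choices": example.get("choices"),
        },
        "answerKey": example.get("answerKey"),
    }
```

Point 4 (a full structural diff against the original JSONL files) would then be a matter of comparing each restored example to its source record.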
1,224,740,303
4,274
Add API code examples for IterableDataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-03T22:44:17
2022-05-04T16:29:32
2022-05-04T16:22:04
This PR adds API code examples for `IterableDataset` and `IterableDatasetDicts`.
stevhliu
https://github.com/huggingface/datasets/pull/4274
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4274", "html_url": "https://github.com/huggingface/datasets/pull/4274", "diff_url": "https://github.com/huggingface/datasets/pull/4274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4274.patch", "merged_at": "2022-05-04T16:22:04" }
true
1,224,681,036
4,273
leaderboard info added for TNE
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-03T21:35:41
2022-05-05T13:25:24
2022-05-05T13:18:13
null
yanaiela
https://github.com/huggingface/datasets/pull/4273
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4273", "html_url": "https://github.com/huggingface/datasets/pull/4273", "diff_url": "https://github.com/huggingface/datasets/pull/4273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4273.patch", "merged_at": "2022-05-05T13:18:13" }
true
1,224,635,660
4,272
Fix typo in logging docs
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "> This PR fixes #4271.\r\n\r\nThings have not changed when searching \"tqdm\" in the Dataset document. The second result still performs as \"Enable\".", "Hi @jiangwy99, the fix will appear on the `main` version of the docs:\r\n\r\n...
2022-05-03T20:47:57
2022-05-04T15:42:27
2022-05-04T06:58:36
This PR fixes #4271.
stevhliu
https://github.com/huggingface/datasets/pull/4272
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4272", "html_url": "https://github.com/huggingface/datasets/pull/4272", "diff_url": "https://github.com/huggingface/datasets/pull/4272.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4272.patch", "merged_at": "2022-05-04T06:58:35" }
true
1,224,404,403
4,271
A typo in docs of datasets.disable_progress_bar
closed
[ "Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :)" ]
2022-05-03T17:44:56
2022-05-04T06:58:35
2022-05-04T06:58:35
## Describe the bug

In the docs of v2.1.0 `datasets.disable_progress_bar`, "enable" should be replaced with "disable".
jiangwangyi
https://github.com/huggingface/datasets/issues/4271
null
false
1,224,244,460
4,270
Fix style in openbookqa dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-03T15:21:34
2022-05-06T08:38:06
2022-05-03T16:20:52
CI in PR: - #4259 was green, but after merging it to master, a code quality error appeared.
albertvillanova
https://github.com/huggingface/datasets/pull/4270
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4270", "html_url": "https://github.com/huggingface/datasets/pull/4270", "diff_url": "https://github.com/huggingface/datasets/pull/4270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4270.patch", "merged_at": "2022-05-03T16:20:52" }
true
1,223,865,145
4,269
Add license and point of contact to big_patent dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-03T09:24:07
2022-05-06T08:38:09
2022-05-03T11:16:19
Update metadata of big_patent dataset with: - license - point of contact
albertvillanova
https://github.com/huggingface/datasets/pull/4269
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4269", "html_url": "https://github.com/huggingface/datasets/pull/4269", "diff_url": "https://github.com/huggingface/datasets/pull/4269.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4269.patch", "merged_at": "2022-05-03T11:16:19" }
true
1,223,331,964
4,268
error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
closed
[ "It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for [\"word\"](https://en.wiktionary.org/wiki/word),\r\n\r\nPronunciation\r\n([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wi...
2022-05-02T20:34:25
2022-05-06T15:53:30
2022-05-03T11:23:48
## Describe the bug

Error generated when attempting to download the dataset.

## Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```

## Expected results

A clear and concise description of the expected results.

## Actual results

```
ExpectedMoreDownloadedFiles                Traceback (most recent call last)
[<ipython-input-62-4ac5cf959477>](https://localhost:8080/#) in <module>()
      1 from datasets import load_dataset
      2
----> 3 dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")

3 frames
[/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     31         return
     32     if len(set(expected_checksums) - set(recorded_checksums)) > 0:
---> 33         raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
     34     if len(set(recorded_checksums) - set(expected_checksums)) > 0:
     35         raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))

ExpectedMoreDownloadedFiles: {'/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz', '/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz.lock'}
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
i-am-neo
https://github.com/huggingface/datasets/issues/4268
null
false
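The check that fails is visible in the traceback: `verify_checksums` compares the set of expected file keys against the set actually downloaded. Restated in isolation below (a sketch of the logic shown in the traceback, not the exact `datasets` source). Notably, the expected keys in the error are absolute paths from the dataset author's machine (`/home/leandro/...`), which no other environment can reproduce.

```python
class ExpectedMoreDownloadedFiles(Exception):
    pass


class UnexpectedDownloadedFile(Exception):
    pass


def verify_checksums(expected_checksums, recorded_checksums):
    """Sketch of the verification step from the traceback above: expected
    keys that were never downloaded raise ExpectedMoreDownloadedFiles;
    downloads not listed as expected raise UnexpectedDownloadedFile."""
    if expected_checksums is None:
        return
    missing = set(expected_checksums) - set(recorded_checksums)
    if missing:
        raise ExpectedMoreDownloadedFiles(str(missing))
    unexpected = set(recorded_checksums) - set(expected_checksums)
    if unexpected:
        raise UnexpectedDownloadedFile(str(unexpected))
```

With author-machine paths recorded as expected keys, the `missing` set can never be empty for another user, so the first branch always fires.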
1,223,214,275
4,267
Replace data URL in SAMSum dataset within the same repository
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-02T18:38:08
2022-05-06T08:38:13
2022-05-02T19:03:49
Replace data URL with one in the same repository.
albertvillanova
https://github.com/huggingface/datasets/pull/4267
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4267", "html_url": "https://github.com/huggingface/datasets/pull/4267", "diff_url": "https://github.com/huggingface/datasets/pull/4267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4267.patch", "merged_at": "2022-05-02T19:03:49" }
true
1,223,116,436
4,266
Add HF Speech Bench to Librispeech Dataset Card
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-02T16:59:31
2022-05-05T08:47:20
2022-05-05T08:40:09
Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) through someone with permissions? cc @patrickvonplaten: more leaderboard promotion!
sanchit-gandhi
https://github.com/huggingface/datasets/pull/4266
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4266", "html_url": "https://github.com/huggingface/datasets/pull/4266", "diff_url": "https://github.com/huggingface/datasets/pull/4266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4266.patch", "merged_at": "2022-05-05T08:40:09" }
true
1,222,723,083
4,263
Rename imagenet2012 -> imagenet-1k
closed
[ "> Later we can add imagenet-21k as a new dataset if we want.\r\n\r\nisn't it what models refer to as `imagenet` already?", "> isn't it what models refer to as imagenet already?\r\n\r\nI wasn't sure, but it looks like it indeed. Therefore having a dataset `imagenet` for ImageNet 21k makes sense actually.\r\n\r\nE...
2022-05-02T10:26:21
2022-05-02T17:50:46
2022-05-02T16:32:57
On the Hugging Face Hub, users refer to imagenet2012 (from #4178) as imagenet-1k in their model tags. To correctly link models to ImageNet, we should rename this dataset `imagenet-1k`. Later we can add `imagenet-21k` as a new dataset if we want.

Once this one is merged we can delete the `imagenet2012` dataset repository on the Hub.

EDIT: to complete the rationale on why we should name it `imagenet-1k`: if users specifically added the tag `imagenet-1k`, then it could be for two reasons (not sure which one is predominant). Either they:
- wanted to make it explicit that it's not 21k -> the distinction is important for the community
- or they have been following this convention from other models -> the convention implicitly exists already
lhoestq
https://github.com/huggingface/datasets/pull/4263
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4263", "html_url": "https://github.com/huggingface/datasets/pull/4263", "diff_url": "https://github.com/huggingface/datasets/pull/4263.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4263.patch", "merged_at": "2022-05-02T16:32:57" }
true
1,222,130,749
4,262
Add YAML tags to Dataset Card rotten tomatoes
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-01T11:59:08
2022-05-03T14:27:33
2022-05-03T14:20:35
The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to each other.
mo6zes
https://github.com/huggingface/datasets/pull/4262
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4262", "html_url": "https://github.com/huggingface/datasets/pull/4262", "diff_url": "https://github.com/huggingface/datasets/pull/4262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4262.patch", "merged_at": "2022-05-03T14:20:35" }
true
1,221,883,779
4,261
data leakage in `webis/conclugen` dataset
closed
[ "Hi @xflashxx, thanks for reporting.\r\n\r\nPlease note that this dataset was generated and shared by Webis Group: https://huggingface.co/webis\r\n\r\nWe are contacting the dataset owners to inform them about the issue you found. We'll keep you updated of their reply.", "i'd suggest just pinging the authors here ...
2022-04-30T17:43:37
2022-05-03T06:04:26
2022-05-03T06:04:26
## Describe the bug

Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results. Furthermore, all splits contain duplicate samples.

## Steps to reproduce the bug

```python
from datasets import load_dataset

training = load_dataset("webis/conclugen", "base", split="train")
validation = load_dataset("webis/conclugen", "base", split="validation")
testing = load_dataset("webis/conclugen", "base", split="test")

# collect which sample id's are present in the training split
ids_validation = list()
ids_testing = list()

for train_sample in training:
    train_argument = train_sample["argument"]
    train_conclusion = train_sample["conclusion"]
    train_id = train_sample["id"]

    # test if current sample is in validation split
    if train_argument in validation["argument"]:
        for validation_sample in validation:
            validation_argument = validation_sample["argument"]
            validation_conclusion = validation_sample["conclusion"]
            validation_id = validation_sample["id"]
            if train_argument == validation_argument and train_conclusion == validation_conclusion:
                ids_validation.append(validation_id)

    # test if current sample is in test split
    if train_argument in testing["argument"]:
        for testing_sample in testing:
            testing_argument = testing_sample["argument"]
            testing_conclusion = testing_sample["conclusion"]
            testing_id = testing_sample["id"]
            if train_argument == testing_argument and train_conclusion == testing_conclusion:
                ids_testing.append(testing_id)
```

## Expected results

Length of both lists `ids_validation` and `ids_testing` should be zero.

## Actual results

Length of `ids_validation` = `2556`
Length of `ids_testing` = `287`

Furthermore, there seem to be duplicate samples in (at least) the *training* split, since:

`print(len(set(ids_validation)))` = `950`
`print(len(set(ids_testing)))` = `101`

All in all, around 7% of the samples of each of the *validation* and *test* splits seem to be present in the *training* split.

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: macOS-12.3.1-arm64-arm-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0
xflashxx
https://github.com/huggingface/datasets/issues/4261
null
false
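The quadratic scan in the reproduction script can be replaced by a single pass over each split using a set of (argument, conclusion) pairs. A hedged sketch reusing the field names from the script above:

```python
def find_leaked_ids(train_split, other_split):
    """Return ids from other_split whose (argument, conclusion) pair also
    appears in train_split -- a linear-time version of the nested scan above.

    Each split is an iterable of dicts with 'argument', 'conclusion', and
    'id' keys, matching the fields used in the issue's reproduction script.
    """
    train_pairs = {(ex["argument"], ex["conclusion"]) for ex in train_split}
    return [
        ex["id"]
        for ex in other_split
        if (ex["argument"], ex["conclusion"]) in train_pairs
    ]
```

Running it once against the validation split and once against the test split, then comparing `len(ids)` with `len(set(ids))`, would reproduce both the leakage counts and the duplicate counts reported above.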
1,221,830,292
4,260
Add mr_polarity movie review sentiment classification
closed
[ "whoops just found https://huggingface.co/datasets/rotten_tomatoes" ]
2022-04-30T13:19:33
2022-04-30T14:16:25
2022-04-30T14:16:25
Add the MR (Movie Review) dataset. The original dataset contains sentences from Rotten Tomatoes labeled as either "positive" or "negative".

Homepage: [https://www.cs.cornell.edu/people/pabo/movie-review-data/](https://www.cs.cornell.edu/people/pabo/movie-review-data/)
paperswithcode: [https://paperswithcode.com/dataset/mr](https://paperswithcode.com/dataset/mr)

- [ ] I was not able to generate dummy data: the original dataset files have ".pos" and ".neg" as file extensions, so the auto-generator does not work. Is it fine like this or should dummy data be added?
mo6zes
https://github.com/huggingface/datasets/pull/4260
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4260", "html_url": "https://github.com/huggingface/datasets/pull/4260", "diff_url": "https://github.com/huggingface/datasets/pull/4260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4260.patch", "merged_at": null }
true
1,221,768,025
4,259
Fix bug in choices labels in openbookqa dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-30T07:41:39
2022-05-04T06:31:31
2022-05-03T15:14:21
This PR fixes the Bug in the openbookqa dataset as mentioned in this issue #3550. Fix #3550. cc. @lhoestq @mariosasko
manandey
https://github.com/huggingface/datasets/pull/4259
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4259", "html_url": "https://github.com/huggingface/datasets/pull/4259", "diff_url": "https://github.com/huggingface/datasets/pull/4259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4259.patch", "merged_at": "2022-05-03T15:14:21" }
true
1,221,637,727
4,258
Fix/start token mask issue and update documentation
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Good catch ! Thanks :)\r\n> \r\n> Next time can you describe your fix in the Pull Request description please ?\r\n\r\nThanks. Also whoops, sorry about not being very descriptive. I updated the pull request description, and will kee...
2022-04-29T22:42:44
2022-05-02T16:33:20
2022-05-02T16:26:12
This PR fixes a couple of bugs: 1) the perplexity was calculated with a 0 in the attention mask for the start token, which caused incorrectly high perplexity scores; 2) the documentation was not updated
TristanThrush
https://github.com/huggingface/datasets/pull/4258
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4258", "html_url": "https://github.com/huggingface/datasets/pull/4258", "diff_url": "https://github.com/huggingface/datasets/pull/4258.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4258.patch", "merged_at": "2022-05-02T16:26:12" }
true
1,221,393,137
4,257
Create metric card for Mahalanobis Distance
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-29T18:37:27
2022-05-02T14:50:18
2022-05-02T14:43:24
Proposing a metric card to better explain how Mahalanobis distance works (last one for now :sweat_smile:)
sashavor
https://github.com/huggingface/datasets/pull/4257
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4257", "html_url": "https://github.com/huggingface/datasets/pull/4257", "diff_url": "https://github.com/huggingface/datasets/pull/4257.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4257.patch", "merged_at": "2022-05-02T14:43:24" }
true
1,221,379,625
4,256
Create metric card for MSE
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-29T18:21:22
2022-05-02T14:55:42
2022-05-02T14:48:47
Proposing a metric card for Mean Squared Error
sashavor
https://github.com/huggingface/datasets/pull/4256
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4256", "html_url": "https://github.com/huggingface/datasets/pull/4256", "diff_url": "https://github.com/huggingface/datasets/pull/4256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4256.patch", "merged_at": "2022-05-02T14:48:47" }
true
1,221,142,899
4,255
No google drive URL for pubmed_qa
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI is failing because some sections are missing in the dataset card, but this is unrelated to this PR - Merging !" ]
2022-04-29T15:55:46
2022-04-29T16:24:55
2022-04-29T16:18:56
I hosted the data files in https://huggingface.co/datasets/pubmed_qa. This is allowed because the data is under the MIT license. cc @stas00
lhoestq
https://github.com/huggingface/datasets/pull/4255
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4255", "html_url": "https://github.com/huggingface/datasets/pull/4255", "diff_url": "https://github.com/huggingface/datasets/pull/4255.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4255.patch", "merged_at": "2022-04-29T16:18:56" }
true
1,220,204,395
4,254
Replace data URL in SAMSum dataset and support streaming
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-29T08:21:43
2022-05-06T08:38:16
2022-04-29T16:26:09
This PR replaces data URL in SAMSum dataset: - original host (arxiv.org) does not allow HTTP Range requests - we have hosted the data on the Hub (license: CC BY-NC-ND 4.0) Moreover, it implements support for streaming. Fix #4146. Related to: #4236. CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/4254
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4254", "html_url": "https://github.com/huggingface/datasets/pull/4254", "diff_url": "https://github.com/huggingface/datasets/pull/4254.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4254.patch", "merged_at": "2022-04-29T16:26:08" }
true
1,219,286,408
4,253
Create metric cards for mean IOU
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-28T20:58:27
2022-04-29T17:44:47
2022-04-29T17:38:06
Proposing a metric card for mIoU :rocket: sorry for spamming you with review requests, @albertvillanova ! :hugs:
sashavor
https://github.com/huggingface/datasets/pull/4253
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4253", "html_url": "https://github.com/huggingface/datasets/pull/4253", "diff_url": "https://github.com/huggingface/datasets/pull/4253.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4253.patch", "merged_at": "2022-04-29T17:38:06" }
true
1,219,151,100
4,252
Creating metric card for MAE
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-28T19:04:33
2022-04-29T16:59:11
2022-04-29T16:52:30
Initial proposal for MAE metric card
sashavor
https://github.com/huggingface/datasets/pull/4252
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4252", "html_url": "https://github.com/huggingface/datasets/pull/4252", "diff_url": "https://github.com/huggingface/datasets/pull/4252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4252.patch", "merged_at": "2022-04-29T16:52:30" }
true
1,219,116,354
4,251
Metric card for the XTREME-S dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-28T18:32:19
2022-04-29T16:46:11
2022-04-29T16:38:46
Proposing a metric card for the XTREME-S dataset :hugs:
sashavor
https://github.com/huggingface/datasets/pull/4251
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4251", "html_url": "https://github.com/huggingface/datasets/pull/4251", "diff_url": "https://github.com/huggingface/datasets/pull/4251.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4251.patch", "merged_at": "2022-04-29T16:38:46" }
true
1,219,093,830
4,250
Bump PyArrow Version to 6
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Updated meta.yaml as well. Thanks.", "I'm OK with bumping PyArrow to version 6 to match the version in Colab, but maybe a better solution would be to stop using extension types in our codebase to avoid similar issues.", "> but ma...
2022-04-28T18:10:50
2022-05-04T09:36:52
2022-05-04T09:29:46
Fixes #4152 This PR updates the PyArrow version to 6 in setup.py, CI job files .circleci/config.yaml and .github/workflows/benchmarks.yaml files. This will fix ArrayND error which exists in pyarrow 5.
dnaveenr
https://github.com/huggingface/datasets/pull/4250
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4250", "html_url": "https://github.com/huggingface/datasets/pull/4250", "diff_url": "https://github.com/huggingface/datasets/pull/4250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4250.patch", "merged_at": "2022-05-04T09:29:46" }
true
1,218,524,424
4,249
Support streaming XGLUE dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-28T10:27:23
2022-05-06T08:38:21
2022-04-28T16:08:03
Support streaming XGLUE dataset. Fix #4247. CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/4249
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4249", "html_url": "https://github.com/huggingface/datasets/pull/4249", "diff_url": "https://github.com/huggingface/datasets/pull/4249.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4249.patch", "merged_at": "2022-04-28T16:08:03" }
true
1,218,460,444
4,248
conll2003 dataset loads original data.
closed
[ "Thanks for reporting @sue99.\r\n\r\nUnfortunately. I'm not able to reproduce your problem:\r\n```python\r\nIn [1]: import datasets\r\n ...: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"conll2003\")\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n fea...
2022-04-28T09:33:31
2022-07-18T07:15:48
2022-07-18T07:15:48
## Describe the bug I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text. Is this a bug or should I use another dataset_name like `lhoestq/conll2003` ? ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset dataset = load_dataset("conll2003") ``` ## Expected results { "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0], "id": "0", "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7], "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."] } ## Actual results ```python print(dataset) DatasetDict({ train: Dataset({ features: ['text'], num_rows: 219554 }) test: Dataset({ features: ['text'], num_rows: 50350 }) validation: Dataset({ features: ['text'], num_rows: 55044 }) }) ``` ```python for i in range(20): print(dataset['train'][i]) {'text': '-DOCSTART- -X- -X- O'} {'text': ''} {'text': 'EU NNP B-NP B-ORG'} {'text': 'rejects VBZ B-VP O'} {'text': 'German JJ B-NP B-MISC'} {'text': 'call NN I-NP O'} {'text': 'to TO B-VP O'} {'text': 'boycott VB I-VP O'} {'text': 'British JJ B-NP B-MISC'} {'text': 'lamb NN I-NP O'} {'text': '. . O O'} {'text': ''} {'text': 'Peter NNP B-NP B-PER'} {'text': 'Blackburn NNP I-NP I-PER'} {'text': ''} {'text': 'BRUSSELS NNP B-NP B-LOC'} {'text': '1996-08-22 CD I-NP O'} {'text': ''} {'text': 'The DT B-NP O'} {'text': 'European NNP I-NP B-ORG'} ```
sue991
https://github.com/huggingface/datasets/issues/4248
null
false
1,218,320,882
4,247
The data preview of XGLUE
closed
[ "![image](https://user-images.githubusercontent.com/49108847/165700611-915b4343-766f-4b81-bdaa-b31950250f06.png)\r\n", "Thanks for reporting @czq1999.\r\n\r\nNote that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.\r\n\r\nThat is the case for XGLUE dataset (...
2022-04-28T07:30:50
2022-04-29T08:23:28
2022-04-28T16:08:03
It seems that something is wrong with the data preview of XGLUE
czq1999
https://github.com/huggingface/datasets/issues/4247
null
false
1,218,320,293
4,246
Support to load dataset with TSV files by passing only dataset name
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-28T07:30:15
2022-05-06T08:38:28
2022-05-06T08:14:07
This PR implements support to load a dataset (w/o script) containing TSV files by passing only the dataset name (no need to pass `sep='\t'`): ```python ds = load_dataset("dataset/name") ``` The refactoring allows for future builder kwargs customizations based on file extension. Related to #4238.
albertvillanova
https://github.com/huggingface/datasets/pull/4246
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4246", "html_url": "https://github.com/huggingface/datasets/pull/4246", "diff_url": "https://github.com/huggingface/datasets/pull/4246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4246.patch", "merged_at": "2022-05-06T08:14:07" }
true
1,217,959,400
4,245
Add code examples for DatasetDict
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-27T22:52:22
2022-04-29T18:19:34
2022-04-29T18:13:03
This PR adds code examples for `DatasetDict` in the API reference :)
stevhliu
https://github.com/huggingface/datasets/pull/4245
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4245", "html_url": "https://github.com/huggingface/datasets/pull/4245", "diff_url": "https://github.com/huggingface/datasets/pull/4245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4245.patch", "merged_at": "2022-04-29T18:13:03" }
true
1,217,732,221
4,244
task id update
closed
[ "Reverted the multi-input-text-classification tag from task_categories and added it as task_ids @lhoestq ", "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-27T18:28:14
2022-05-04T10:43:53
2022-05-04T10:36:37
changed multi input text classification as task id instead of category
nazneenrajani
https://github.com/huggingface/datasets/pull/4244
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4244", "html_url": "https://github.com/huggingface/datasets/pull/4244", "diff_url": "https://github.com/huggingface/datasets/pull/4244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4244.patch", "merged_at": "2022-05-04T10:36:37" }
true
1,217,689,909
4,243
WIP: Initial shades loading script and readme
closed
[ "Thanks for your contribution, @shayne-longpre.\r\n\r\nAre you still interested in adding this dataset? As we are transferring the dataset scripts from this GitHub repo, we would recommend you to add this to the Hugging Face Hub: https://huggingface.co/datasets" ]
2022-04-27T17:45:43
2022-10-03T09:36:35
2022-10-03T09:36:35
null
shayne-longpre
https://github.com/huggingface/datasets/pull/4243
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4243", "html_url": "https://github.com/huggingface/datasets/pull/4243", "diff_url": "https://github.com/huggingface/datasets/pull/4243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4243.patch", "merged_at": null }
true
1,217,665,960
4,242
Update auth when mirroring datasets on the hub
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-27T17:22:31
2022-04-27T17:37:04
2022-04-27T17:30:42
We don't need to use extraHeaders for rate limits anymore. Anyway, extraHeaders was not working with git LFS because it was passing the wrong auth to S3.
lhoestq
https://github.com/huggingface/datasets/pull/4242
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4242", "html_url": "https://github.com/huggingface/datasets/pull/4242", "diff_url": "https://github.com/huggingface/datasets/pull/4242.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4242.patch", "merged_at": "2022-04-27T17:30:42" }
true
1,217,423,686
4,241
NonMatchingChecksumError when attempting to download GLUE
closed
[ "Hi :)\r\n\r\nI think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:\r\n\r\n```py\r\npip install -U datasets\r\n```\r\n\r\nThen you can download:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nds = load_data...
2022-04-27T14:14:21
2022-04-28T07:45:27
2022-04-28T07:45:27
## Describe the bug I am trying to download the GLUE dataset from the NLP module but get an error (see below). ## Steps to reproduce the bug ```python import nlp nlp.__version__ # '0.2.0' nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ``` ## Expected results I expect the dataset to download without an error. ## Actual results ``` INFO:nlp.load:Checking /home/richier/.cache/huggingface/datasets/5fe6ab0df8a32a3371b2e6a969d31d855a19563724fb0d0f163748c270c0ac60.2ea96febf19981fae5f13f0a43d4e2aa58bc619bc23acf06de66675f425a5538.py for additional imports. INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4 INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.py INFO:nlp.load:Found dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/dataset_infos.json to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/dataset_infos.json INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.json 
INFO:nlp.info:Loading Dataset Infos from /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4 INFO:nlp.builder:Generating dataset glue (/home/richier/.cache/huggingface/datasets/glue/rte/1.0.0) INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source INFO:nlp.utils.file_utils:Couldn't get ETag version for url https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb INFO:nlp.utils.file_utils:https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb not found in cache or force_download set to True, downloading to /home/richier/.cache/huggingface/datasets/downloads/tmpldt3n805 Downloading and preparing dataset glue/rte (download: 680.81 KiB, generated: 1.83 MiB, total: 2.49 MiB) to /home/richier/.cache/huggingface/datasets/glue/rte/1.0.0... 
Downloading: 100%|██████████| 73.0/73.0 [00:00<00:00, 73.9kB/s] INFO:nlp.utils.file_utils:storing https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb in cache at /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64 INFO:nlp.utils.file_utils:creating metadata file for /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64 --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-7-669a8343dcc1> in <module> ----> 1 nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 418 verify_infos = not save_infos and not ignore_verifications 419 self._download_and_prepare( --> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 421 ) 422 # Sync info ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 458 # Checksums verification 459 if verify_infos: --> 460 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums()) 461 for split_generator in split_generators: 462 if 
str(split_generator.split_info.name).lower() == "all": ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums) 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] 35 if len(bad_urls) > 0: ---> 36 raise NonMatchingChecksumError(str(bad_urls)) 37 logger.info("All the checksums matched successfully.") 38 NonMatchingChecksumError: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-redhat-8.5-Ootpa - Python version: 3.6.13 - PyArrow version: 6.0.1 - Pandas version: 1.1.5
drussellmrichie
https://github.com/huggingface/datasets/issues/4241
null
false
1,217,287,594
4,240
Fix yield for crd3
closed
[ "I don't think you need to generate new dummy data, since they're in the same format as the original data.\r\n\r\nThe CI is failing because of this error:\r\n```python\r\n> turn[\"names\"] = turn[\"NAMES\"]\r\nE TypeError: tuple indices must be integers or slices, not str...
2022-04-27T12:31:36
2022-04-29T12:41:41
2022-04-29T12:41:41
Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example. Modified the features accordingly: ``` "turns": [ { "names": datasets.features.Sequence(datasets.Value("string")), "utterances": datasets.features.Sequence(datasets.Value("string")), "number": datasets.Value("int32"), } ], } ``` I wasn't able to run the `datasets-cli dummy_data datasets` command. Is there a workaround for this?
shanyas10
https://github.com/huggingface/datasets/pull/4240
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4240", "html_url": "https://github.com/huggingface/datasets/pull/4240", "diff_url": "https://github.com/huggingface/datasets/pull/4240.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4240.patch", "merged_at": "2022-04-29T12:41:41" }
true
1,217,269,689
4,239
Small fixes in ROC AUC docs
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-27T12:15:50
2022-05-02T13:28:57
2022-05-02T13:22:03
The list of use cases did not render on GitHub with the prepended spacing. Additionally, some typos were fixed.
wschella
https://github.com/huggingface/datasets/pull/4239
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4239", "html_url": "https://github.com/huggingface/datasets/pull/4239", "diff_url": "https://github.com/huggingface/datasets/pull/4239.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4239.patch", "merged_at": "2022-05-02T13:22:03" }
true
1,217,168,123
4,238
Dataset caching policy
closed
[ "Hi @loretoparisi, thanks for reporting.\r\n\r\nThere is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode=\"force_redownload\")`.\r\n\r\nPlease, let me know if this fixes your problem.\r\n\r\nI can confirm you that y...
2022-04-27T10:42:11
2022-04-27T16:29:25
2022-04-27T16:28:50
## Describe the bug I cannot clear the cache of my dataset files, even though I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error: ``` [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` The file is now cleaned up, but I still get the error. This happens even after I inspect the local cached contents and clean up the files locally: ```python from datasets import load_dataset_builder dataset_builder = load_dataset_builder("loretoparisi/tatoeba-sentences") print(dataset_builder.cache_dir) print(dataset_builder.info.features) print(dataset_builder.info.splits) ``` ``` Using custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519 None None ``` and removing files located at `/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-*`. Is there any remote file caching policy in place? If so, is it possible to programmatically disable it? Currently it seems that the file `test.csv` on the repo [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/blob/main/test.csv) is cached remotely. In fact, when I download the file locally from the raw link, the file is up to date; but if I use it within `datasets` as shown above, it always gives me the first revision of the file, not the last. Thank you. 
## Steps to reproduce the bug ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext",
"izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset( "loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'], ) # You can make this part faster with num_proc=<some int> sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) sentences = sentences.shuffle() ``` ## Expected results Properly tokenize dataset file `test.csv` without issues. ## Actual results Specify the actual results or traceback. ``` Downloading data files: 100% 2/2 [00:16<00:00, 7.34s/it] Downloading data: 100% 391M/391M [00:12<00:00, 36.6MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 40.0MB/s] Extracting data files: 100% 2/2 [00:00<00:00, 47.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 
100% 2/2 [00:00<00:00, 25.94it/s] 11% 942339/8256449 [01:55<13:11, 9245.85ex/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-3-6a9867fad8d6>](https://localhost:8080/#) in <module>() 12 ) 13 # You can make this part faster with num_proc=<some int> ---> 14 sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) 15 sentences = sentences.shuffle() 10 frames [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` ## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5 - ``` ``` - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - ```
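As pointed out in the discussion, the cache can be bypassed programmatically by passing `download_mode="force_redownload"` to `load_dataset`. A minimal sketch, mirroring the report's call (the actual download is commented out since it needs network access):

```python
# Sketch: the same load_dataset arguments as in the report, plus
# download_mode="force_redownload" so previously cached files are ignored.
load_kwargs = dict(
    data_files={"train": "train.csv", "test": "test.csv"},
    delimiter="\t",
    column_names=["label", "text"],
    download_mode="force_redownload",  # re-fetch instead of reusing the cache
)
# from datasets import load_dataset
# sentences = load_dataset("loretoparisi/tatoeba-sentences", **load_kwargs)
print(load_kwargs["download_mode"])  # force_redownload
```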
loretoparisi
https://github.com/huggingface/datasets/issues/4238
null
false
1,217,121,044
4,237
Common Voice 8 doesn't show datasets viewer
closed
[ "Thanks for reporting. I understand it's an error in the dataset script. To reproduce:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> split_names = ds.get_dataset_split_names(\"mozilla-foundation/common_voice_8_0\", use_auth_token=\"**********\")\r\nDownloading builder script: 100%|███████████████████████████...
2022-04-27T10:05:20
2022-05-10T12:17:05
2022-05-10T12:17:04
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
patrickvonplaten
https://github.com/huggingface/datasets/issues/4237
null
false
1,217,115,691
4,236
Replace data URL in big_patent dataset and support streaming
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "I first uploaded the data files to the Hub: I think it is a good option because we have git lfs to track versions and changes. Moreover people will be able to make PRs to propose updates on the data files.\r\n- I would have preferred...
2022-04-27T10:01:13
2022-06-10T08:10:55
2022-05-02T18:21:15
This PR replaces the Google Drive URL with our Hub one, once the data owners have approved to host their data on the Hub. Moreover, this PR makes the dataset streamable. Fix #4217.
albertvillanova
https://github.com/huggingface/datasets/pull/4236
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4236", "html_url": "https://github.com/huggingface/datasets/pull/4236", "diff_url": "https://github.com/huggingface/datasets/pull/4236.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4236.patch", "merged_at": "2022-05-02T18:21:15" }
true
1,216,952,640
4,235
How to load VERY LARGE dataset?
closed
[ "The `Trainer` support `IterableDataset`, not just datasets." ]
2022-04-27T07:50:13
2023-07-25T15:07:57
2023-07-25T15:07:57
### System Info ```shell I am using transformer trainer while meeting the issue. The trainer requests torch.utils.data.Dataset as input, which loads the whole dataset into the memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except using IterableDataset, which loads samples of data separately, and results in low efficiency. I wonder if there are any tricks like Sharding in huggingface trainer. Looking forward to your reply. ``` ### Who can help? Trainer: @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction None ### Expected behavior ```shell I wonder if there are any tricks like fairseq Sharding very large datasets https://fairseq.readthedocs.io/en/latest/getting_started.html. Thanks a lot! ```
CaoYiqingT
https://github.com/huggingface/datasets/issues/4235
null
false
1,216,818,846
4,234
Autoeval config
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Related to: https://github.com/huggingface/autonlp-backend/issues/414 and https://github.com/huggingface/autonlp-backend/issues/424", "The tests are failing due to the changed metadata:\r\n\r\n```\r\ngot an unexpected keyword argum...
2022-04-27T05:32:10
2022-05-06T13:20:31
2022-05-05T18:20:58
Added autoeval config to imdb as pilot
nazneenrajani
https://github.com/huggingface/datasets/pull/4234
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4234", "html_url": "https://github.com/huggingface/datasets/pull/4234", "diff_url": "https://github.com/huggingface/datasets/pull/4234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4234.patch", "merged_at": "2022-05-05T18:20:58" }
true
1,216,665,044
4,233
Autoeval
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4233). All of your documentation changes will be reflected on that endpoint." ]
2022-04-27T01:32:09
2022-04-27T05:29:30
2022-04-27T01:32:23
null
nazneenrajani
https://github.com/huggingface/datasets/pull/4233
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4233", "html_url": "https://github.com/huggingface/datasets/pull/4233", "diff_url": "https://github.com/huggingface/datasets/pull/4233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4233.patch", "merged_at": null }
true
1,216,659,444
4,232
adding new tag to tasks.json and modified for existing datasets
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "closing in favor of https://github.com/huggingface/datasets/pull/4244" ]
2022-04-27T01:21:09
2022-05-03T14:23:56
2022-05-03T14:16:39
null
nazneenrajani
https://github.com/huggingface/datasets/pull/4232
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4232", "html_url": "https://github.com/huggingface/datasets/pull/4232", "diff_url": "https://github.com/huggingface/datasets/pull/4232.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4232.patch", "merged_at": null }
true
1,216,651,960
4,231
Fix invalid url to CC-Aligned dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-27T01:07:01
2022-05-16T17:01:13
2022-05-16T16:53:12
The CC-Aligned dataset url has changed to https://data.statmt.org/cc-aligned/, the old address http://www.statmt.org/cc-aligned/ is no longer valid
juntang-zhuang
https://github.com/huggingface/datasets/pull/4231
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4231", "html_url": "https://github.com/huggingface/datasets/pull/4231", "diff_url": "https://github.com/huggingface/datasets/pull/4231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4231.patch", "merged_at": "2022-05-16T16:53:12" }
true
1,216,643,661
4,230
Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?
closed
[ "Thanks for reporting @beyondguo.\r\n\r\nIndeed, we generate this dataset from this raw data file URL: https://data.deepai.org/conll2003.zip\r\nAnd that URL only contains the English version.", "The German data requires payment\r\n\r\nThe [original task page](https://www.clips.uantwerpen.be/conll2003/ner/) states...
2022-04-27T00:53:52
2023-07-25T15:10:15
2023-07-25T15:10:15
![image](https://user-images.githubusercontent.com/37113676/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png) But on huggingface datasets: ![image](https://user-images.githubusercontent.com/37113676/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png) Where is the German data?
beyondguo
https://github.com/huggingface/datasets/issues/4230
null
false
1,216,638,968
4,229
new task tag
closed
[]
2022-04-27T00:47:08
2022-04-27T00:48:28
2022-04-27T00:48:17
multi-input-text-classification tag for classification datasets that take more than one input
nazneenrajani
https://github.com/huggingface/datasets/pull/4229
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4229", "html_url": "https://github.com/huggingface/datasets/pull/4229", "diff_url": "https://github.com/huggingface/datasets/pull/4229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4229.patch", "merged_at": null }
true
1,216,523,043
4,228
new task tag
closed
[]
2022-04-26T22:00:33
2022-04-27T00:48:31
2022-04-27T00:46:31
multi-input-text-classification tag for classification datasets that take more than one input
nazneenrajani
https://github.com/huggingface/datasets/pull/4228
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4228", "html_url": "https://github.com/huggingface/datasets/pull/4228", "diff_url": "https://github.com/huggingface/datasets/pull/4228.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4228.patch", "merged_at": null }
true
1,216,455,316
4,227
Add f1 metric card, update docstring in py file
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-26T20:41:03
2022-05-03T12:50:23
2022-05-03T12:43:33
null
emibaylor
https://github.com/huggingface/datasets/pull/4227
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4227", "html_url": "https://github.com/huggingface/datasets/pull/4227", "diff_url": "https://github.com/huggingface/datasets/pull/4227.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4227.patch", "merged_at": "2022-05-03T12:43:33" }
true
1,216,331,073
4,226
Add pearsonr mc, update functionality to match the original docs
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "thank you @lhoestq!! :hugs: " ]
2022-04-26T18:30:46
2022-05-03T17:09:24
2022-05-03T17:02:28
- adds pearsonr metric card - adds ability to return p-value - p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value.
emibaylor
https://github.com/huggingface/datasets/pull/4226
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4226", "html_url": "https://github.com/huggingface/datasets/pull/4226", "diff_url": "https://github.com/huggingface/datasets/pull/4226.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4226.patch", "merged_at": "2022-05-03T17:02:28" }
true
1,216,213,464
4,225
autoeval config
closed
[]
2022-04-26T16:38:34
2022-04-27T00:48:31
2022-04-26T22:00:26
add train eval index for autoeval
nazneenrajani
https://github.com/huggingface/datasets/pull/4225
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4225", "html_url": "https://github.com/huggingface/datasets/pull/4225", "diff_url": "https://github.com/huggingface/datasets/pull/4225.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4225.patch", "merged_at": null }
true
1,216,209,667
4,224
autoeval config
closed
[]
2022-04-26T16:35:19
2022-04-26T16:36:45
2022-04-26T16:36:45
add train eval index for autoeval
nazneenrajani
https://github.com/huggingface/datasets/pull/4224
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4224", "html_url": "https://github.com/huggingface/datasets/pull/4224", "diff_url": "https://github.com/huggingface/datasets/pull/4224.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4224.patch", "merged_at": null }
true
1,216,107,082
4,223
Add Accuracy Metric Card
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-26T15:10:46
2022-05-03T14:27:45
2022-05-03T14:20:47
- adds accuracy metric card - updates docstring in accuracy.py - adds .json file with metric card and docstring information
emibaylor
https://github.com/huggingface/datasets/pull/4223
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4223", "html_url": "https://github.com/huggingface/datasets/pull/4223", "diff_url": "https://github.com/huggingface/datasets/pull/4223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4223.patch", "merged_at": "2022-05-03T14:20:47" }
true
1,216,056,439
4,222
Fix description links in dataset cards
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Non passing tests are due to other pre-existing errors in dataset cards: not related to this PR." ]
2022-04-26T14:36:25
2022-05-06T08:38:38
2022-04-26T16:52:29
I noticed many links were not properly displayed (only text, no link) on the Hub because of wrong syntax, e.g.: https://huggingface.co/datasets/big_patent This PR fixes all description links in dataset cards.
albertvillanova
https://github.com/huggingface/datasets/pull/4222
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4222", "html_url": "https://github.com/huggingface/datasets/pull/4222", "diff_url": "https://github.com/huggingface/datasets/pull/4222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4222.patch", "merged_at": "2022-04-26T16:52:29" }
true
1,215,911,182
4,221
Dictionary Feature
closed
[ "Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n...
2022-04-26T12:50:18
2022-04-29T14:52:19
2022-04-28T17:04:58
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something? Thank you in advance.
jordiae
https://github.com/huggingface/datasets/issues/4221
null
false
1,215,225,802
4,220
Altered faiss installation comment
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi ! Can you explain why this change is needed ?", "Facebook recommends installing FAISS using conda (https://github.com/facebookresearch/faiss/blob/main/INSTALL.md). pip does not seem to have the latest version of FAISS. The lates...
2022-04-26T01:20:43
2022-05-09T17:29:34
2022-05-09T17:22:09
null
vishalsrao
https://github.com/huggingface/datasets/pull/4220
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4220", "html_url": "https://github.com/huggingface/datasets/pull/4220", "diff_url": "https://github.com/huggingface/datasets/pull/4220.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4220.patch", "merged_at": "2022-05-09T17:22:09" }
true
1,214,934,025
4,219
Add F1 Metric Card
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-25T19:14:56
2022-04-26T20:44:18
2022-04-26T20:37:46
null
emibaylor
https://github.com/huggingface/datasets/pull/4219
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4219", "html_url": "https://github.com/huggingface/datasets/pull/4219", "diff_url": "https://github.com/huggingface/datasets/pull/4219.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4219.patch", "merged_at": null }
true
1,214,748,226
4,218
Make code for image downloading from image urls cacheable
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-25T16:17:59
2022-04-26T17:00:24
2022-04-26T13:38:26
Fix #4199
mariosasko
https://github.com/huggingface/datasets/pull/4218
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4218", "html_url": "https://github.com/huggingface/datasets/pull/4218", "diff_url": "https://github.com/huggingface/datasets/pull/4218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4218.patch", "merged_at": "2022-04-26T13:38:26" }
true
1,214,688,141
4,217
Big_Patent dataset broken
closed
[ "Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.\r\n\r\nSee related issues: https://github.com/huggingface/datasets/issues?q=is%3Aissue+is%3Aopen+drive.google.com\r\n\r\nTo quote [@lhoestq](https://gith...
2022-04-25T15:31:45
2022-05-26T06:29:43
2022-05-02T18:21:15
## Dataset viewer issue for '*big_patent*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)* *Unable to view because it says FileNotFound, also cannot download it through the python API* Am I the one who added this dataset ? No
Matthew-Larsen
https://github.com/huggingface/datasets/issues/4217
null
false
1,214,614,029
4,216
Avoid recursion error in map if example is returned as dict value
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-25T14:40:32
2022-05-04T17:20:06
2022-05-04T17:12:52
I noticed this bug while answering [this question](https://discuss.huggingface.co/t/correct-way-to-create-a-dataset-from-a-csv-file/15686/11?u=mariosasko). This code replicates the bug: ```python from datasets import Dataset dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]}) dset.map(lambda ex: {"translation": ex}) ``` and this is the fix for it (before this PR): ```python from datasets import Dataset dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]}) dset.map(lambda ex: {"translation": dict(ex)}) ``` Internally, this can be fixed by merging two dicts via dict unpacking (instead of `dict.update`) in `Dataset.map`, which avoids creating recursive dictionaries. P.S. `{**a, **b}` is slightly more performant than `a.update(b)` in my benchmarks.
mariosasko
https://github.com/huggingface/datasets/pull/4216
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4216", "html_url": "https://github.com/huggingface/datasets/pull/4216", "diff_url": "https://github.com/huggingface/datasets/pull/4216.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4216.patch", "merged_at": "2022-05-04T17:12:52" }
true
1,214,579,162
4,215
Add `drop_last_batch` to `IterableDataset.map`
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-25T14:15:19
2022-05-03T15:56:07
2022-05-03T15:48:54
Addresses this comment: https://github.com/huggingface/datasets/pull/3801#pullrequestreview-901736921
mariosasko
https://github.com/huggingface/datasets/pull/4215
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4215", "html_url": "https://github.com/huggingface/datasets/pull/4215", "diff_url": "https://github.com/huggingface/datasets/pull/4215.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4215.patch", "merged_at": "2022-05-03T15:48:54" }
true
1,214,572,430
4,214
Skip checksum computation in Imagefolder by default
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-25T14:10:41
2022-05-03T15:28:32
2022-05-03T15:21:29
Avoids having to set `ignore_verifications=True` in `load_dataset("imagefolder", ...)` to skip checksum verification and speed up loading. The user can still pass `DownloadConfig(record_checksums=True)` to not skip this part.
mariosasko
https://github.com/huggingface/datasets/pull/4214
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4214", "html_url": "https://github.com/huggingface/datasets/pull/4214", "diff_url": "https://github.com/huggingface/datasets/pull/4214.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4214.patch", "merged_at": "2022-05-03T15:21:29" }
true
1,214,510,010
4,213
ETT time series dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "thank you!\r\n" ]
2022-04-25T13:26:18
2022-05-05T12:19:21
2022-05-05T12:10:35
Ready for review.
kashif
https://github.com/huggingface/datasets/pull/4213
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4213", "html_url": "https://github.com/huggingface/datasets/pull/4213", "diff_url": "https://github.com/huggingface/datasets/pull/4213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4213.patch", "merged_at": "2022-05-05T12:10:35" }
true
1,214,498,582
4,212
[Common Voice] Make sure bytes are correctly deleted if `path` exists
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "cool that you noticed that we store unnecessary bytes again :D " ]
2022-04-25T13:18:26
2022-04-26T22:54:28
2022-04-26T22:48:27
`path` should be set to the local path inside the audio feature if it exists so that bytes can correctly be deleted.
patrickvonplaten
https://github.com/huggingface/datasets/pull/4212
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4212", "html_url": "https://github.com/huggingface/datasets/pull/4212", "diff_url": "https://github.com/huggingface/datasets/pull/4212.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4212.patch", "merged_at": "2022-04-26T22:48:27" }
true
1,214,361,837
4,211
DatasetDict containing Datasets with different features when pushed to hub gets remapped features
closed
[ "Hi @pietrolesci, thanks for reporting.\r\n\r\nPlease note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n\r\nTo handle sub-datasets with different features, we u...
2022-04-25T11:22:54
2023-04-06T19:25:50
2022-05-20T15:15:30
Hi there, I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same. Dataset and code to reproduce available [here](https://huggingface.co/datasets/pietrolesci/robust_nli). In short: I have 3 feature mapping ```python Tri_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), } ) Ent_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]), } ) Con_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]), } ) ``` Then I create different datasets ```python dataset_splits = {} for split in df["split"].unique(): print(split) df_split = df.loc[df["split"] == split].copy() if split in Tri_dataset: df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) ds = Dataset.from_pandas(df_split, features=Tri_features) elif split in Ent_bin_dataset: df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1}) ds = Dataset.from_pandas(df_split, features=Ent_features) elif split in Con_bin_dataset: df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1}) ds = Dataset.from_pandas(df_split, features=Con_features) else: print("ERROR:", split) dataset_splits[split] = ds datasets = DatasetDict(dataset_splits) ``` I then push to hub ```python datasets.push_to_hub("pietrolesci/robust_nli", token="<token>") ``` Finally, I load it from the hub 
```python datasets_loaded_from_hub = load_dataset("pietrolesci/robust_nli") ``` And I get that ```python datasets["LI_TS"].features != datasets_loaded_from_hub["LI_TS"].features ``` since ```python "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]) ``` gets remapped to ```python "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]) ```
pietrolesci
https://github.com/huggingface/datasets/issues/4211
null
false
1,214,089,130
4,210
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
closed
[ "Hi! Casting class labels from strings is currently not supported in the CSV loader, but you can get the same result with an additional map as follows:\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\...
2022-04-25T07:28:42
2022-05-31T12:16:31
2022-05-31T12:16:31
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl",
"kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'], features = features ``` ERROR: ``` ClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 
'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 
'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None) Value(dtype='string', id=None) Using custom data configuration loretoparisi--tatoeba-sentences-7b2c5e991f398f39 Downloading and preparing dataset csv/loretoparisi--tatoeba-sentences to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-7b2c5e991f398f39/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... Downloading data files: 100% 2/2 [00:18<00:00, 8.06s/it] Downloading data: 100% 391M/391M [00:13<00:00, 35.3MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 36.5MB/s] Failed to read file '/root/.cache/huggingface/datasets/downloads/933132df9905194ea9faeb30cabca8c49318795612f6495fcb941a290191dd5d' with error <class 'ValueError'>: invalid literal for int() with base 10: 'cmn' --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) 15 frames /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() ValueError: invalid literal for int() with base 10: 'cmn' ``` while loading without `features` it loads without errors ``` sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, 
delimiter='\t', column_names=['label', 'text'] ) ``` but the `label` col seems to be wrong (without the `ClassLabel` object): ``` sentences['train'].features {'label': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None)} ``` The dataset was https://huggingface.co/datasets/loretoparisi/tatoeba-sentences Dataset format is: ``` ces Nechci vědět, co je tam uvnitř. ces Kdo o tom chce slyšet? deu Tom sagte, er fühle sich nicht wohl. ber Mel-iyi-d anida-t tura ? hun Gondom lesz rá rögtön. ber Mel-iyi-d anida-tt tura ? deu Ich will dich nicht reden hören. ``` ### Expected behavior ```shell correctly load train and test files. ```
loretoparisi
https://github.com/huggingface/datasets/issues/4210
null
false
1,213,716,426
4,208
Add CMU MoCap Dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "- Updated the readme.\r\n- Added dummy_data.zip and ran the all the tests.\r\n\r\nThe dataset works for \"asf/amc\" and \"avi\" formats which have a single download link for the complete dataset. But \"c3d\" and \"mpg\" have multiple...
2022-04-24T17:31:08
2022-10-03T09:38:24
2022-10-03T09:36:30
Resolves #3457 Dataset Request : Add CMU Graphics Lab Motion Capture dataset [#3457](https://github.com/huggingface/datasets/issues/3457) This PR adds the CMU MoCap Dataset. The authors didn't respond even after multiple follow ups, so I ended up crawling the website to get categories, subcategories and description information. Some of the subjects do not have category/subcategory/description as well. I am using a subject to categories, subcategories and description map (metadata file). Currently the loading of the dataset works for "asf/amc" and "avi" formats since they have a single download link. But "c3d" and "mpg" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? Any other way to go about this ? Any suggestions/inputs on this would be helpful. Thank you.
dnaveenr
https://github.com/huggingface/datasets/pull/4208
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4208", "html_url": "https://github.com/huggingface/datasets/pull/4208", "diff_url": "https://github.com/huggingface/datasets/pull/4208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4208.patch", "merged_at": null }
true
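On the multi-archive question in the PR above: one approach (a sketch, not necessarily how the dataset script ended up doing it) is to let `dl_manager.download_and_extract()` return its several extraction paths and then move their contents under one common root:

```python
import os
import shutil
import tempfile

def merge_extracted_dirs(paths, dest):
    """Move the contents of several extraction folders into one folder."""
    os.makedirs(dest, exist_ok=True)
    for path in paths:
        for name in os.listdir(path):
            shutil.move(os.path.join(path, name), os.path.join(dest, name))
    return dest

# Demo with stand-in folders instead of real archive parts.
root = tempfile.mkdtemp()
parts = []
for i in range(2):
    part = os.path.join(root, f"part{i}")
    os.makedirs(part)
    open(os.path.join(part, f"file{i}.txt"), "w").close()
    parts.append(part)

merged = merge_extracted_dirs(parts, os.path.join(root, "merged"))
assert sorted(os.listdir(merged)) == ["file0.txt", "file1.txt"]
```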
1,213,604,615
4,207
[Minor edit] Fix typo in class name
closed
[]
2022-04-24T09:49:37
2022-05-05T13:17:47
2022-05-05T13:17:47
Typo: `datasets.DatsetDict` -> `datasets.DatasetDict`
cakiki
https://github.com/huggingface/datasets/pull/4207
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4207", "html_url": "https://github.com/huggingface/datasets/pull/4207", "diff_url": "https://github.com/huggingface/datasets/pull/4207.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4207.patch", "merged_at": "2022-05-05T13:17:47" }
true
1,212,715,581
4,206
Add Nerval Metric
closed
[ "Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate" ]
2022-04-22T19:45:00
2023-07-11T09:34:56
2023-07-11T09:34:55
This PR adds readme.md and ner_val.py to metrics. Nerval is a python package that helps evaluate NER models. It creates classification report and confusion matrix at entity level.
maridda
https://github.com/huggingface/datasets/pull/4206
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4206", "html_url": "https://github.com/huggingface/datasets/pull/4206", "diff_url": "https://github.com/huggingface/datasets/pull/4206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4206.patch", "merged_at": null }
true
1,212,466,138
4,205
Fix `convert_file_size_to_int` for kilobits and megabits
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-22T14:56:21
2022-05-03T15:28:42
2022-05-03T15:21:48
Minor change to fully align this function with the recent change in Transformers (https://github.com/huggingface/transformers/pull/16891)
mariosasko
https://github.com/huggingface/datasets/pull/4205
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4205", "html_url": "https://github.com/huggingface/datasets/pull/4205", "diff_url": "https://github.com/huggingface/datasets/pull/4205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4205.patch", "merged_at": "2022-05-03T15:21:48" }
true
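The distinction this PR draws can be illustrated with a simplified stand-in for `convert_file_size_to_int` (the real helper also handles binary units like `KiB`/`MiB`): uppercase-`B` suffixes are bytes, while `Kb`/`Mb`/`Gb` are bits, so the bit units divide by 8.

```python
# Simplified stand-in for convert_file_size_to_int. Order matters: check
# the byte suffixes before the case-sensitive bit suffixes.
UNITS = {
    "KB": 10**3, "MB": 10**6, "GB": 10**9,                  # bytes
    "Kb": 10**3 // 8, "Mb": 10**6 // 8, "Gb": 10**9 // 8,   # bits -> bytes
}

def to_bytes(size):
    if isinstance(size, int):
        return size
    for suffix, factor in UNITS.items():
        if size.endswith(suffix):
            return int(size[: -len(suffix)]) * factor
    raise ValueError(f"size {size!r} is not in a supported format")

assert to_bytes("5MB") == 5_000_000
assert to_bytes("8Mb") == 1_000_000  # 8 megabits == 1 megabyte
```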
1,212,431,764
4,204
Add Recall Metric Card
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "This looks good to me! " ]
2022-04-22T14:24:26
2022-05-03T13:23:23
2022-05-03T13:16:24
What this PR mainly does: - add metric card for recall metric - update docs in recall python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!)
emibaylor
https://github.com/huggingface/datasets/pull/4204
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4204", "html_url": "https://github.com/huggingface/datasets/pull/4204", "diff_url": "https://github.com/huggingface/datasets/pull/4204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4204.patch", "merged_at": "2022-05-03T13:16:24" }
true
1,212,431,067
4,203
Add Precision Metric Card
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-22T14:23:48
2022-05-03T14:23:40
2022-05-03T14:16:46
What this PR mainly does: - add metric card for precision metric - update docs in precision python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!)
emibaylor
https://github.com/huggingface/datasets/pull/4203
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4203", "html_url": "https://github.com/huggingface/datasets/pull/4203", "diff_url": "https://github.com/huggingface/datasets/pull/4203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4203.patch", "merged_at": "2022-05-03T14:16:45" }
true
1,212,326,288
4,202
Fix some type annotation in doc
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-22T12:53:31
2022-04-22T15:03:00
2022-04-22T14:56:43
null
thomasw21
https://github.com/huggingface/datasets/pull/4202
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4202", "html_url": "https://github.com/huggingface/datasets/pull/4202", "diff_url": "https://github.com/huggingface/datasets/pull/4202.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4202.patch", "merged_at": "2022-04-22T14:56:43" }
true
1,212,086,420
4,201
Update GH template for dataset viewer issues
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "You can see rendering at: https://github.com/huggingface/datasets/blob/6b48fedbdafe12a42c7b6edcecc32820af1a4822/.github/ISSUE_TEMPLATE/dataset-viewer.yml" ]
2022-04-22T09:34:44
2022-05-06T08:38:43
2022-04-26T08:45:55
Update template to use new issue forms instead. With this PR we can check if this new feature is useful for us. Once validated, we can update the other templates. CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/4201
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4201", "html_url": "https://github.com/huggingface/datasets/pull/4201", "diff_url": "https://github.com/huggingface/datasets/pull/4201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4201.patch", "merged_at": "2022-04-26T08:45:55" }
true
1,211,980,110
4,200
Add to docs how to load from local script
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-22T08:08:25
2022-05-06T08:39:25
2022-04-23T05:47:25
This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). Although this is an infrequent use case, there might be some users interested in it. Related to #4192 CC: @stevhliu
albertvillanova
https://github.com/huggingface/datasets/pull/4200
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4200", "html_url": "https://github.com/huggingface/datasets/pull/4200", "diff_url": "https://github.com/huggingface/datasets/pull/4200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4200.patch", "merged_at": "2022-04-23T05:47:24" }
true
1,211,953,308
4,199
Cache miss during reload for datasets using image fetch utilities through map
closed
[ "Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache", "Hi @apsdehal! Can you verify that replacing\r\n```python\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\...
2022-04-22T07:47:08
2022-04-26T17:00:32
2022-04-26T13:38:26
## Describe the bug It looks like the result of a `.map` operation on a dataset misses the cache when you reload the script, so it always runs from scratch. In the same interpreter session, it is able to find the cache and reload it. But when you exit the interpreter and relaunch it, the downloading starts from scratch. ## Steps to reproduce the bug Using the example provided in the `red_caps` dataset. ```python from concurrent.futures import ThreadPoolExecutor from functools import partial import io import os import re import urllib import PIL.Image import datasets from datasets import load_dataset from datasets.utils.file_utils import get_datasets_user_agent def fetch_single_image(image_url, timeout=None, retries=0): for _ in range(retries + 1): try: request = urllib.request.Request( image_url, data=None, headers={"user-agent": get_datasets_user_agent()}, ) with urllib.request.urlopen(request, timeout=timeout) as req: image = PIL.Image.open(io.BytesIO(req.read())) break except Exception: image = None return image def fetch_images(batch, num_threads, timeout=None, retries=0): fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries) with ThreadPoolExecutor(max_workers=num_threads) as executor: batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"])) return batch def process_image_urls(batch): processed_batch_image_urls = [] for image_url in batch["image_url"]: processed_example_image_urls = [] image_url_splits = re.findall(r"http\S+", image_url) for image_url_split in image_url_splits: if "imgur" in image_url_split and "," in image_url_split: for image_url_part in image_url_split.split(","): if not image_url_part: continue image_url_part = image_url_part.strip() root, ext = os.path.splitext(image_url_part) if not root.startswith("http"): root = "http://i.imgur.com/" + root root = root.split("#")[0] if not ext: ext = ".jpg" ext = re.split(r"[?%]", ext)[0] image_url_part = root + ext processed_example_image_urls.append(image_url_part) else: processed_example_image_urls.append(image_url_split) processed_batch_image_urls.append(processed_example_image_urls) batch["image_url"] = processed_batch_image_urls return batch dset = load_dataset("red_caps", "jellyfish") dset = dset.map(process_image_urls, batched=True, num_proc=4) features = dset["train"].features.copy() features["image"] = datasets.Sequence(datasets.Image()) num_threads = 5 dset = dset.map(fetch_images, batched=True, batch_size=50, features=features, fn_kwargs={"num_threads": num_threads}) ``` Run this in an interpreter or as a script twice and see that the cache is missed the second time. ## Expected results At reload there should not be any cache miss. ## Actual results Every time the script is run, the cache is missed and the dataset is built from scratch. ## Environment info - `datasets` version: 2.1.1.dev0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
apsdehal
https://github.com/huggingface/datasets/issues/4199
null
false
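Background for the caching behavior discussed above: `datasets` derives a cache fingerprint by hashing the mapped function together with its arguments, so anything that serializes differently across interpreter sessions (e.g. a value recomputed at call time instead of captured once at module level) produces a new fingerprint and a cache miss. A stripped-down illustration of the idea, not the library's actual `Hasher`:

```python
import hashlib
import pickle

def fingerprint(func_name, kwargs):
    # Stand-in for the fingerprinting datasets performs: hash the function
    # identity together with its (picklable) keyword arguments.
    payload = pickle.dumps((func_name, sorted(kwargs.items())))
    return hashlib.sha256(payload).hexdigest()

first = fingerprint("fetch_images", {"num_threads": 5, "timeout": None})
second = fingerprint("fetch_images", {"num_threads": 5, "timeout": None})
assert first == second  # identical inputs -> identical fingerprint -> cache hit

changed = fingerprint("fetch_images", {"num_threads": 5, "timeout": 3})
assert changed != first  # any drift in the inputs busts the cache
```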
1,211,456,559
4,198
There is no dataset
closed
[]
2022-04-21T19:19:26
2022-05-03T11:29:05
2022-04-22T06:12:25
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
wilfoderek
https://github.com/huggingface/datasets/issues/4198
null
false
1,211,342,558
4,197
Add remove_columns=True
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Any reason why we can't just do `[inputs.copy()]` in this line for in-place operations to not have effects anymore:\r\nhttps://github.com/huggingface/datasets/blob/bf432011ff9155a5bc16c03956bc63e514baf80d/src/datasets/arrow_dataset.p...
2022-04-21T17:28:13
2023-09-24T10:02:32
2022-04-22T14:45:30
This should fix all the issues we have with in-place operations in mapping functions. This is crucial, as we do some weird things like: ``` def apply(batch): batch_size = len(batch["id"]) batch["text"] = ["potato" for _ in range(batch_size)] return {} # Columns are: {"id": int} dset.map(apply, batched=True, remove_columns="text") # crashes because `text` is not in the original columns dset.map(apply, batched=True) # the mapped dataset has a `text` column ``` In this PR we suggest to have `remove_columns=True` so that we ignore the input completely, and just use the output to generate the mapped dataset. This means that in-place operations won't have any effect anymore.
thomasw21
https://github.com/huggingface/datasets/pull/4197
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4197", "html_url": "https://github.com/huggingface/datasets/pull/4197", "diff_url": "https://github.com/huggingface/datasets/pull/4197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4197.patch", "merged_at": null }
true
1,211,271,261
4,196
Embed image and audio files in `save_to_disk`
closed
[]
2022-04-21T16:25:18
2022-12-14T18:22:59
2022-12-14T18:22:59
Following https://github.com/huggingface/datasets/pull/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files. Adding `embed_external_files` and set it to True by default to save_to_disk would be kind of a breaking change since some users will get bigger Arrow files when updating the lib, but the advantages are nice: - the resulting dataset is self contained, in case you want to delete your cache for example or share it with someone else - users also upload these Arrow files to cloud storage via the fs parameter, and in this case they would expect to upload a self-contained dataset - consistency with push_to_hub This can be implemented at the same time as sharding for `save_to_disk` for efficiency, and reuse the helpers from `push_to_hub` to embed the external files. cc @mariosasko
lhoestq
https://github.com/huggingface/datasets/issues/4196
null
false
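The embedding step proposed in the issue above can be sketched as follows: an external image/audio cell is stored as `{"path": ..., "bytes": ...}`, and embedding means reading the local file into the `bytes` field so the Arrow data no longer depends on the local filesystem. This is a simplified stand-in for the helpers reused from `push_to_hub`, using a fake file rather than a real image:

```python
import os
import tempfile

def embed_file(cell):
    # If the cell only references a local path, inline the raw bytes and keep
    # just the file name, making the stored dataset self-contained.
    if cell["bytes"] is None and cell["path"] and os.path.isfile(cell["path"]):
        with open(cell["path"], "rb") as f:
            return {"path": os.path.basename(cell["path"]), "bytes": f.read()}
    return cell

# Demo with a stand-in file instead of a real image.
path = os.path.join(tempfile.mkdtemp(), "img.png")
with open(path, "wb") as f:
    f.write(b"\x89PNG...")

embedded = embed_file({"path": path, "bytes": None})
assert embedded == {"path": "img.png", "bytes": b"\x89PNG..."}
```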
1,210,958,602
4,194
Support lists of multi-dimensional numpy arrays
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-21T12:22:26
2022-05-12T15:16:34
2022-05-12T15:08:40
Fix #4191. CC: @SaulLu
albertvillanova
https://github.com/huggingface/datasets/pull/4194
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4194", "html_url": "https://github.com/huggingface/datasets/pull/4194", "diff_url": "https://github.com/huggingface/datasets/pull/4194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4194.patch", "merged_at": "2022-05-12T15:08:40" }
true
1,210,734,701
4,193
Document save_to_disk and push_to_hub on images and audio files
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Good catch, I updated the docstrings" ]
2022-04-21T09:04:36
2022-04-22T09:55:55
2022-04-22T09:49:31
Following https://github.com/huggingface/datasets/pull/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data.
lhoestq
https://github.com/huggingface/datasets/pull/4193
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4193", "html_url": "https://github.com/huggingface/datasets/pull/4193", "diff_url": "https://github.com/huggingface/datasets/pull/4193.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4193.patch", "merged_at": "2022-04-22T09:49:31" }
true
1,210,692,554
4,192
load_dataset can't load local dataset,Unable to find ...
closed
[ "Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json...
2022-04-21T08:28:58
2022-04-25T16:51:57
2022-04-22T07:39:53
Traceback (most recent call last): File "/home/gs603/ahf/pretrained/model.py", line 48, in <module> dataset = load_dataset("json",data_files="dataset/dataset_infos.json") File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset **config_kwargs, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1496, in load_dataset_builder data_files=data_files, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1155, in dataset_module_factory download_mode=download_mode, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 800, in get_module data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 582, in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 544, in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 194, in resolve_patterns_locally_or_by_urls for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions): File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 144, in _resolve_single_pattern_locally raise FileNotFoundError(error_msg) FileNotFoundError: Unable to find '/home/gs603/ahf/pretrained/dataset/dataset_infos.json' at /home/gs603/ahf/pretrained ![image](https://user-images.githubusercontent.com/33253979/164413285-84ea65ac-9126-408f-9cd2-ce4751a5dd73.png) 
![image](https://user-images.githubusercontent.com/33253979/164413338-4735142f-408b-41d9-ab87-8484de2be54f.png) The code is in model.py. Why can't I use the `load_dataset` function to load my local dataset?
ahf876828330
https://github.com/huggingface/datasets/issues/4192
null
false
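As the resolution of the issue above notes, `dataset_infos.json` is a metadata file, not data: `load_dataset("json", data_files=...)` expects an actual data file such as JSON Lines. A sketch of preparing one with the standard library (the `load_dataset` call itself appears only in a comment):

```python
import json
import os
import tempfile

# Build a small JSON Lines file of the kind the "json" builder reads.
records = [{"text": "hello", "label": 0}, {"text": "world", "label": 1}]
path = os.path.join(tempfile.mkdtemp(), "train.jsonl")
with open(path, "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# load_dataset("json", data_files={"train": path}) would read these rows;
# here we just check that the file round-trips.
with open(path) as f:
    loaded = [json.loads(line) for line in f]
assert loaded == records
```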
1,210,028,090
4,191
feat: create an `Array3D` column from a list of arrays of dimension 2
closed
[ "Hi @SaulLu, thanks for your proposal.\r\n\r\nJust I got a bit confused about the dimensions...\r\n- For the 2D case, you mention it is possible to create an `Array2D` from a list of arrays of dimension 1\r\n- However, you give an example of creating an `Array2D` from arrays of dimension 2:\r\n - the values of `da...
2022-04-20T18:04:32
2022-05-12T15:08:40
2022-05-12T15:08:40
**Is your feature request related to a problem? Please describe.** It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of lists of arrays of dimension 1. To illustrate my proposal, let's take the following toy dataset: ```python import numpy as np from datasets import Dataset, features data_map = { 1: np.array([[0.2, 0,4],[0.19, 0,3]]), 2: np.array([[0.1, 0,4],[0.19, 0,3]]), } def create_toy_ds(): my_dict = {"id":[1, 2]} return Dataset.from_dict(my_dict) ds = create_toy_ds() ``` The following 2D processing works without any errors raised: ```python def prepare_dataset_2D(batch): batch["pixel_values"] = [data_map[index] for index in batch["id"]] return batch ds_2D = ds.map( prepare_dataset_2D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array2D(shape=(2, 3), dtype="float32")}) ) ``` The following 3D processing doesn't work: ```python def prepare_dataset_3D(batch): batch["pixel_values"] = [[data_map[index]] for index in batch["id"]] return batch ds_3D = ds.map( prepare_dataset_3D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` The error raised is: ``` --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) [<ipython-input-6-676547e4cd41>](https://localhost:8080/#) in <module>() 3 batched=True, 4 remove_columns=ds.column_names, ----> 5 features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) 6 ) 12 frames [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size,
features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1971 new_fingerprint=new_fingerprint, 1972 disable_tqdm=disable_tqdm, -> 1973 desc=desc, 1974 ) 1975 else: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 518 self: "Dataset" = kwargs.pop("self") 519 # apply actual function --> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 522 for dataset in datasets: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 485 } 486 # apply actual function --> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 489 # re-apply format to the output [/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 456 # Call actual function 457 --> 458 out = func(self, *args, **kwargs) 459 460 # Update fingerprint of in-place transforms + update in-place history of transforms [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2354 writer.write_table(batch) 2355 else: -> 2356 writer.write_batch(batch) 2357 if update_data and writer is not None: 2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_batch(self, batch_examples, writer_batch_size) 505 
col_try_type = try_features[col] if try_features is not None and col in try_features else None 506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 507 arrays.append(pa.array(typed_sequence)) 508 inferred_features[col] = typed_sequence.get_inferred_type() 509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in __arrow_array__(self, type) 175 storage = list_of_np_array_to_pyarrow_listarray(data, type=pa_type.value_type) 176 else: --> 177 storage = pa.array(data, pa_type.storage_dtype) 178 return pa.ExtensionArray.from_storage(pa_type, storage) 179 /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Can only convert 1-dimensional array values ``` **Describe the solution you'd like** No error in the second scenario and an identical result to the following snippets. 
**Describe alternatives you've considered** There are other alternatives that work such as: ```python def prepare_dataset_3D_bis(batch): batch["pixel_values"] = [[data_map[index].tolist()] for index in batch["id"]] return batch ds_3D_bis = ds.map( prepare_dataset_3D_bis, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` or ```python def prepare_dataset_3D_ter(batch): batch["pixel_values"] = [data_map[index][np.newaxis, :, :] for index in batch["id"]] return batch ds_3D_ter = ds.map( prepare_dataset_3D_ter, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` But both solutions require the user to be aware that `data_map[index]` is an `np.array` type. cc @lhoestq as we discuss this offline :smile:
SaulLu
https://github.com/huggingface/datasets/issues/4191
null
false
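A compact restatement of why the third alternative in the issue above works: what Arrow ultimately needs for an `Array3D(shape=(1, 2, 3))` cell is a single array whose shape matches the declared one, so adding a leading axis to the 2D array suffices (a sketch, using the toy values from the issue):

```python
import numpy as np

# The per-example value must be one (1, 2, 3) array, not a Python list
# wrapping a 2D array.
arr2d = np.array([[0.2, 0.0, 0.4], [0.19, 0.0, 0.3]], dtype="float32")

as_list_of_2d = [arr2d]          # the failing form from the issue
as_3d = arr2d[np.newaxis, :, :]  # the working form

assert as_3d.shape == (1, 2, 3)
```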
1,209,901,677
4,190
Deprecate `shard_size` in `push_to_hub` in favor of `max_shard_size`
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-20T16:08:01
2022-04-22T13:58:25
2022-04-22T13:52:00
This PR adds a `max_shard_size` param to `push_to_hub` and deprecates `shard_size` in favor of this new param to have a more descriptive name (a shard has at most the `shard_size` bytes in `push_to_hub`) for the param and to align the API with [Transformers](https://github.com/huggingface/transformers/blob/ff06b177917384137af2d9585697d2d76c40cdfc/src/transformers/modeling_utils.py#L1350).
mariosasko
https://github.com/huggingface/datasets/pull/4190
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4190", "html_url": "https://github.com/huggingface/datasets/pull/4190", "diff_url": "https://github.com/huggingface/datasets/pull/4190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4190.patch", "merged_at": "2022-04-22T13:52:00" }
true
1,209,881,351
4,189
Document how to use FAISS index for special operations
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-20T15:51:56
2022-05-06T08:43:10
2022-05-06T08:35:52
Document how to use FAISS index for special operations, by accessing the index itself. Close #4029.
albertvillanova
https://github.com/huggingface/datasets/pull/4189
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4189", "html_url": "https://github.com/huggingface/datasets/pull/4189", "diff_url": "https://github.com/huggingface/datasets/pull/4189.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4189.patch", "merged_at": "2022-05-06T08:35:52" }
true
1,209,740,957
4,188
Support streaming cnn_dailymail dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Did you run the `datasets-cli` command before merging to make sure you generate all the examples ?" ]
2022-04-20T14:04:36
2022-05-11T13:39:06
2022-04-20T15:52:49
Support streaming cnn_dailymail dataset. Fix #3969. CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/4188
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4188", "html_url": "https://github.com/huggingface/datasets/pull/4188", "diff_url": "https://github.com/huggingface/datasets/pull/4188.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4188.patch", "merged_at": "2022-04-20T15:52:49" }
true
1,209,721,532
4,187
Don't duplicate data when encoding audio or image
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm not familiar with the concept of streaming vs non-streaming in HF datasets. I just wonder that you have the distinction here. Why doesn't it work to always make use of `bytes`? \"using a local file - which is often required for a...
2022-04-20T13:50:37
2022-04-21T09:17:00
2022-04-21T09:10:47
Right now if you pass both the `bytes` and a local `path` for audio or image data, then the `bytes` are unnecessarily written in the Arrow file, while we could just keep the local `path`. This PR discards the `bytes` when the audio or image file exists locally. In particular it's common for audio datasets builders to provide both the bytes and the local path in order to work for both streaming (using the bytes) and non-streaming mode (using a local file - which is often required for audio). cc @patrickvonplaten
lhoestq
https://github.com/huggingface/datasets/pull/4187
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4187", "html_url": "https://github.com/huggingface/datasets/pull/4187", "diff_url": "https://github.com/huggingface/datasets/pull/4187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4187.patch", "merged_at": "2022-04-21T09:10:47" }
true
1,209,463,599
4,186
Fix outdated docstring about default dataset config
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-20T10:04:51
2022-04-22T12:54:44
2022-04-22T12:48:31
null
lhoestq
https://github.com/huggingface/datasets/pull/4186
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4186", "html_url": "https://github.com/huggingface/datasets/pull/4186", "diff_url": "https://github.com/huggingface/datasets/pull/4186.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4186.patch", "merged_at": "2022-04-22T12:48:31" }
true
1,209,429,743
4,185
Librispeech documentation, clarification on format
open
[ "(@patrickvonplaten )", "Also cc @lhoestq here", "The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is d...
2022-04-20T09:35:55
2022-04-21T11:00:53
null
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53 > Note that in order to limit the required storage for preparing this dataset, the audio > is stored in the .flac format and is not converted to a float32 array. To convert, the audio > file to a float32 array, please make use of the `.map()` function as follows: > > ```python > import soundfile as sf > def map_to_array(batch): > speech_array, _ = sf.read(batch["file"]) > batch["speech"] = speech_array > return batch > dataset = dataset.map(map_to_array, remove_columns=["file"]) > ``` Is this still true? In my case, `ds["train.100"]` returns: ``` Dataset({ features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'], num_rows: 28539 }) ``` and taking the first instance yields: ``` {'file': '374-180298-0000.flac', 'audio': {'path': '374-180298-0000.flac', 'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ..., -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]), 'sampling_rate': 16000}, 'text': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE WISHED', 'speaker_id': 374, 'chapter_id': 180298, 'id': '374-180298-0000'} ``` The `audio` `array` seems to be already decoded. So such convert/decode code as mentioned in the doc is wrong? But I wonder, is it actually stored as flac on disk, and the decoding is done on-the-fly? Or was it decoded already during the preparation and is stored as raw samples on disk? Note that I also used `datasets.load_dataset("librispeech_asr", "clean").save_to_disk(...)` and then `datasets.load_from_disk(...)` in this example. Does this change anything on how it is stored on disk? A small related question: Actually I would prefer to even store it as mp3 or ogg on disk. Is this easy to convert?
albertz
https://github.com/huggingface/datasets/issues/4185
null
false
1,208,592,669
4,184
[Librispeech] Add 'all' config
closed
[ "Fix https://github.com/huggingface/datasets/issues/4179", "_The documentation is not available anymore as the PR was closed or merged._", "Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n\r\nAnd to get the subsets, I do st...
2022-04-19T16:27:56
2024-08-02T05:03:04
2022-04-22T09:45:17
Add `"all"` config to Librispeech Closed #4179
patrickvonplaten
https://github.com/huggingface/datasets/pull/4184
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4184", "html_url": "https://github.com/huggingface/datasets/pull/4184", "diff_url": "https://github.com/huggingface/datasets/pull/4184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4184.patch", "merged_at": "2022-04-22T09:45:17" }
true