Dataset schema (column name, dtype, and min/max over the dump):

| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (length) | 1 | 290 |
| body | string (length) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (length) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (length) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (length) | 0 | 0 |
1,227,592,826
4,290
Update paper link in medmcqa dataset card
Updating readme in medmcqa dataset.
closed
https://github.com/huggingface/datasets/pull/4290
2022-05-06T08:52:51
2022-09-30T11:51:28
2022-09-30T11:49:07
{ "login": "monk1337", "id": 17107749, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,226,821,732
4,288
Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287
This PR fixes the issue recently mentioned in https://github.com/huggingface/datasets/issues/4287 🤗
closed
https://github.com/huggingface/datasets/pull/4288
2022-05-05T15:21:49
2022-05-10T12:55:06
2022-05-10T12:09:48
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[]
true
[]
1,226,806,652
4,287
"NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
## Describe the bug When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception. All that assuming that `datasets` is properly installed and `faiss-gpu` too, as well as all the CUDA drivers required. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from transformers import DPRContextEncoder, DPRContextEncoderTokenizer import torch torch.set_grad_enabled(False) ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") from datasets import load_dataset ds = load_dataset('crime_and_punish', split='train[:100]') ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["line"], return_tensors="pt"))[0][0].numpy()}) ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None` ``` ## Expected results A new column named `embeddings` in the dataset that we're adding the index to. ## Actual results An exception is triggered with the following message `NameError: name 'faiss' is not defined`. ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.13.0-1022-azure-x86_64-with-glibc2.31 - Python version: 3.9.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
closed
https://github.com/huggingface/datasets/issues/4287
2022-05-05T15:09:45
2022-05-10T13:53:19
2022-05-10T13:53:19
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
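A user-side workaround sketch for #4287, pending the fix in #4288: build the index on CPU (the `device=None` default path does not hit the missing import), then move it to a GPU with `faiss` directly. The toy embeddings and the `.faiss_index` attribute access are illustrative assumptions; this requires `faiss-gpu`.

```python
import faiss  # assumes faiss-gpu is installed
import numpy as np
from datasets import Dataset

# Toy dataset with random float32 embeddings, standing in for the DPR output above
ds = Dataset.from_dict({"embeddings": np.random.rand(100, 8).astype("float32").tolist()})

ds.add_faiss_index(column="embeddings")  # CPU build: avoids the NameError branch
cpu_index = ds.get_index("embeddings").faiss_index  # underlying faiss index object
gpu_index = faiss.index_cpu_to_gpu(faiss.StandardGpuResources(), 0, cpu_index)  # move to GPU 0
```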
1,226,758,621
4,286
Add Lahnda language tag
This language is present in [Wikimedia's WIT](https://huggingface.co/datasets/wikimedia/wit_base) dataset.
closed
https://github.com/huggingface/datasets/pull/4286
2022-05-05T14:34:20
2022-05-10T12:10:04
2022-05-10T12:02:38
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,226,374,831
4,285
Update LexGLUE README.md
Update the leaderboard based on the latest results presented in the ACL 2022 version of the article.
closed
https://github.com/huggingface/datasets/pull/4285
2022-05-05T08:36:50
2022-05-05T13:39:04
2022-05-05T13:33:35
{ "login": "iliaschalkidis", "id": 1626984, "type": "User" }
[]
true
[]
1,226,200,727
4,284
Issues in processing very large datasets
## Describe the bug I'm trying to add a feature called "subgraph" to the CNN/DM dataset (via modifications to the run_summarization.py script of Hugging Face Transformers; I'm not quite sure if I'm doing it the right way), but the main problem appears when training starts, with the error `OSError: [Errno 12] Cannot allocate memory`. I suppose this problem is rooted in RAM usage and how the dataset is loaded during training, but I have no clue what I can do to fix it. Observing the dataset's cache directory, I see that it takes ~600GB of disk space, which is why I believe special care is needed when loading it into memory. Here are my modifications to the `run_summarization.py` code. ``` # loading pre-computed dictionary where keys are 'id' of article and values are corresponding subgraph graph_data_train = get_graph_data('train') graph_data_validation = get_graph_data('val') ... ... with training_args.main_process_first(desc="train dataset map pre-processing"): train_dataset = train_dataset.map( preprocess_function_train, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on train dataset", ) ``` And here is the modified preprocessing function: ``` def preprocess_function_train(examples): inputs, targets, sub_graphs, ids = [], [], [], [] for i in range(len(examples[text_column])): if examples[text_column][i] is not None and examples[summary_column][i] is not None: # if examples['doc_id'][i] in graph_data.keys(): inputs.append(examples[text_column][i]) targets.append(examples[summary_column][i]) sub_graphs.append(graph_data_train[examples['id'][i]]) ids.append(examples['id'][i]) inputs = [prefix + inp for inp in inputs] model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True, sub_graphs=sub_graphs, ids=ids) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_target_length, padding=padding, truncation=True) # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore # padding in the loss. if padding == "max_length" and data_args.ignore_pad_token_for_loss: labels["input_ids"] = [ [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"] ] model_inputs["labels"] = labels["input_ids"] return model_inputs ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: Linux Ubuntu - Python version: 3.6 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/4284
2022-05-05T05:01:09
2023-07-25T15:12:38
2023-07-25T15:12:38
{ "login": "sajastu", "id": 10419055, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
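For out-of-memory failures like the one in #4284, `Dataset.map` exposes knobs that trade throughput for memory. A hedged sketch of the call from the report, with illustrative values (`preprocess_function_train` and `column_names` are the names used above):

```python
train_dataset = train_dataset.map(
    preprocess_function_train,
    batched=True,
    batch_size=100,          # fewer examples materialized in RAM per call
    writer_batch_size=100,   # flush mapped rows to the on-disk Arrow cache more often
    num_proc=2,              # each worker holds its own copy of the subgraph dicts
    remove_columns=column_names,
    keep_in_memory=False,    # the default: results go to disk, not RAM
)
```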
1,225,686,988
4,283
Fix filesystem docstring
This PR untangles the `S3FileSystem` docstring so the [parameters](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#parameters) are properly displayed.
closed
https://github.com/huggingface/datasets/pull/4283
2022-05-04T17:42:42
2022-05-06T16:32:02
2022-05-06T06:22:17
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
1,225,616,545
4,282
Don't do unnecessary list type casting to avoid replacing None values by empty lists
In certain cases, `None` values are replaced by empty lists when casting feature types. It happens every time you cast an array of nested lists like [None, [0, 1, 2, 3]] to a different type (to change the integer precision, for example). In this case you'd get [[], [0, 1, 2, 3]]. This issue comes from PyArrow; see the discussion in https://github.com/huggingface/datasets/issues/3676 This issue also happens when no type casting is needed, because casting is supposed to be a no-op in that case. But as https://github.com/huggingface/datasets/issues/3676 showed, it's not the case, and `None` values are replaced by empty lists even if we cast to the exact same type. In this PR I just work around this bug in the case where no type casting is needed. In particular, I call `pa.ListArray.from_arrays` only when necessary. I also added a warning when some `None` values are effectively replaced by empty lists. I wanted to raise an error in this case, but maybe we should wait for a major update to do so. This PR fixes this particular case, which occurs in `run_qa.py` in `transformers`: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # before: # b # 0 [None, [0]] # 1 [[], [0]] # 2 [[], [0]] # 3 [[], [0]] # # now: # b # 0 [None, [0]] # 1 [None, [0]] # 2 [None, [0]] # 3 [None, [0]] ``` cc @sgugger
closed
https://github.com/huggingface/datasets/pull/4282
2022-05-04T16:37:01
2022-05-06T10:43:58
2022-05-06T10:37:00
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,225,556,939
4,281
Remove a copy-paste sentence in dataset cards
Remove the following copy-paste sentence from dataset cards: ``` We show detailed information for up to 5 configurations of the dataset. ```
closed
https://github.com/huggingface/datasets/pull/4281
2022-05-04T15:41:55
2022-05-06T08:38:03
2022-05-04T18:33:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,225,446,844
4,280
Add missing features to commonsense_qa dataset
Partially fix #4275.
closed
https://github.com/huggingface/datasets/pull/4280
2022-05-04T14:24:26
2022-05-06T14:23:57
2022-05-06T14:16:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,225,300,273
4,279
Update minimal PyArrow version warning
Update the minimal PyArrow version warning (should've been part of #4250).
closed
https://github.com/huggingface/datasets/pull/4279
2022-05-04T12:26:09
2022-05-05T08:50:58
2022-05-05T08:43:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,225,122,123
4,278
Add missing features to openbookqa dataset for additional config
Partially fix #4276.
closed
https://github.com/huggingface/datasets/pull/4278
2022-05-04T09:22:50
2022-05-06T13:13:20
2022-05-06T13:06:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,225,002,286
4,277
Enable label alignment for token classification datasets
This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. NER). Example of usage: ```python from datasets import load_dataset ner_ds = load_dataset("conll2003", split="train") # returns [3, 0, 7, 0, 0, 0, 7, 0, 0] ner_ds[0]["ner_tags"] # hypothetical model mapping with O <--> B-LOC label2id = { "B-LOC": "0", "B-MISC": "7", "B-ORG": "3", "B-PER": "1", "I-LOC": "6", "I-MISC": "8", "I-ORG": "4", "I-PER": "2", "O": "5" } ner_aligned_ds = ner_ds.align_labels_with_mapping(label2id, "ner_tags") # returns [3, 5, 7, 5, 5, 5, 7, 5, 5] ner_aligned_ds[0]["ner_tags"] ``` Context: we need this in AutoTrain to automatically align datasets / models during evaluation. cc @abhishekkrthakur
closed
https://github.com/huggingface/datasets/pull/4277
2022-05-04T07:15:16
2022-05-06T15:42:15
2022-05-06T15:36:31
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
1,224,949,252
4,276
OpenBookQA has missing and inconsistent field names
## Describe the bug The OpenBookQA implementation is inconsistent with the original dataset. We need to: 1. Unflatten the dataset field [question][stem], which is currently flattened into question_stem, to match the original format. 2. Add the missing additional fields: - 'fact1': row['fact1'], - 'humanScore': row['humanScore'], - 'clarity': row['clarity'], - 'turkIdAnonymized': row['turkIdAnonymized'] 3. Ensure the structure and every data item in the original OpenBookQA matches our OpenBookQA version. ## Expected results The structure and every data item in the original OpenBookQA matches our OpenBookQA version. ## Actual results TBD ## Environment info - `datasets` version: 2.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
closed
https://github.com/huggingface/datasets/issues/4276
2022-05-04T05:51:52
2022-10-11T17:11:53
2022-10-05T13:50:03
{ "login": "vblagoje", "id": 458335, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,224,943,414
4,275
CommonSenseQA has missing and inconsistent field names
## Describe the bug In short, the CommonSenseQA implementation is inconsistent with the original dataset. More precisely, we need to: 1. Add the dataset's matching "id" field; the current dataset instead regenerates a monotonically increasing id. 2. Unflatten the ["question"]["stem"] field, which is currently flattened into "question", to match the original dataset. 3. Add the missing "question_concept" field in the question tree node. 4. Anything else? Go over the data structure of the newly repaired CommonSenseQA and make sure it matches the original. ## Expected results Every data item of CommonSenseQA should structurally and data-wise match the original CommonSenseQA dataset. ## Actual results TBD ## Environment info - `datasets` version: 2.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
open
https://github.com/huggingface/datasets/issues/4275
2022-05-04T05:38:59
2022-05-04T11:41:18
null
{ "login": "vblagoje", "id": 458335, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,224,740,303
4,274
Add API code examples for IterableDataset
This PR adds API code examples for `IterableDataset` and `IterableDatasetDicts`.
closed
https://github.com/huggingface/datasets/pull/4274
2022-05-03T22:44:17
2022-05-04T16:29:32
2022-05-04T16:22:04
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,224,681,036
4,273
Leaderboard info added for TNE
null
closed
https://github.com/huggingface/datasets/pull/4273
2022-05-03T21:35:41
2022-05-05T13:25:24
2022-05-05T13:18:13
{ "login": "yanaiela", "id": 8031035, "type": "User" }
[]
true
[]
1,224,635,660
4,272
Fix typo in logging docs
This PR fixes #4271.
closed
https://github.com/huggingface/datasets/pull/4272
2022-05-03T20:47:57
2022-05-04T15:42:27
2022-05-04T06:58:36
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
1,224,404,403
4,271
A typo in docs of datasets.disable_progress_bar
## Describe the bug In the docs of v2.1.0 `datasets.disable_progress_bar`, we should replace "enable" with "disable".
closed
https://github.com/huggingface/datasets/issues/4271
2022-05-03T17:44:56
2022-05-04T06:58:35
2022-05-04T06:58:35
{ "login": "jiangwangyi", "id": 39762734, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,224,244,460
4,270
Fix style in openbookqa dataset
CI in PR #4259 was green, but after merging it into master, a code quality error appeared.
closed
https://github.com/huggingface/datasets/pull/4270
2022-05-03T15:21:34
2022-05-06T08:38:06
2022-05-03T16:20:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,223,865,145
4,269
Add license and point of contact to big_patent dataset
Update metadata of the big_patent dataset with license and point of contact.
closed
https://github.com/huggingface/datasets/pull/4269
2022-05-03T09:24:07
2022-05-06T08:38:09
2022-05-03T11:16:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,223,331,964
4,268
error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug Error generated when attempting to download dataset ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") ``` ## Expected results A clear and concise description of the expected results. ## Actual results ``` ExpectedMoreDownloadedFiles Traceback (most recent call last) [<ipython-input-62-4ac5cf959477>](https://localhost:8080/#) in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") 3 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 31 return 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0: ---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0: 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) ExpectedMoreDownloadedFiles: {'/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz', '/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz.lock'} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/4268
2022-05-02T20:34:25
2022-05-06T15:53:30
2022-05-03T11:23:48
{ "login": "i-am-neo", "id": 102043285, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,223,214,275
4,267
Replace data URL in SAMSum dataset within the same repository
Replace data URL with one in the same repository.
closed
https://github.com/huggingface/datasets/pull/4267
2022-05-02T18:38:08
2022-05-06T08:38:13
2022-05-02T19:03:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,223,116,436
4,266
Add HF Speech Bench to Librispeech Dataset Card
Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) through someone with permissions? cc @patrickvonplaten: more leaderboard promotion!
closed
https://github.com/huggingface/datasets/pull/4266
2022-05-02T16:59:31
2022-05-05T08:47:20
2022-05-05T08:40:09
{ "login": "sanchit-gandhi", "id": 93869735, "type": "User" }
[]
true
[]
1,222,723,083
4,263
Rename imagenet2012 -> imagenet-1k
On the Hugging Face Hub, users refer to imagenet2012 (from #4178) as imagenet-1k in their model tags. To correctly link models to imagenet, we should rename this dataset `imagenet-1k`. Later we can add `imagenet-21k` as a new dataset if we want. Once this one is merged we can delete the `imagenet2012` dataset repository on the Hub. EDIT: to complete the rationale on why we should name it `imagenet-1k`: if users specifically added the tag `imagenet-1k`, it could be for two reasons (not sure which one is predominant): either they wanted to make it explicit that it's not 21k (the distinction is important for the community), or they have been following this convention from other models (the convention implicitly exists already).
closed
https://github.com/huggingface/datasets/pull/4263
2022-05-02T10:26:21
2022-05-02T17:50:46
2022-05-02T16:32:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,222,130,749
4,262
Add YAML tags to Dataset Card rotten tomatoes
The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to each other.
closed
https://github.com/huggingface/datasets/pull/4262
2022-05-01T11:59:08
2022-05-03T14:27:33
2022-05-03T14:20:35
{ "login": "mo6zes", "id": 10004251, "type": "User" }
[]
true
[]
1,221,883,779
4,261
data leakage in `webis/conclugen` dataset
## Describe the bug Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results. Furthermore, all splits contain duplicate samples. ## Steps to reproduce the bug ```python from datasets import load_dataset training = load_dataset("webis/conclugen", "base", split="train") validation = load_dataset("webis/conclugen", "base", split="validation") testing = load_dataset("webis/conclugen", "base", split="test") # collect which sample id's are present in the training split ids_validation = list() ids_testing = list() for train_sample in training: train_argument = train_sample["argument"] train_conclusion = train_sample["conclusion"] train_id = train_sample["id"] # test if current sample is in validation split if train_argument in validation["argument"]: for validation_sample in validation: validation_argument = validation_sample["argument"] validation_conclusion = validation_sample["conclusion"] validation_id = validation_sample["id"] if train_argument == validation_argument and train_conclusion == validation_conclusion: ids_validation.append(validation_id) # test if current sample is in test split if train_argument in testing["argument"]: for testing_sample in testing: testing_argument = testing_sample["argument"] testing_conclusion = testing_sample["conclusion"] testing_id = testing_sample["id"] if train_argument == testing_argument and train_conclusion == testing_conclusion: ids_testing.append(testing_id) ``` ## Expected results Length of both lists `ids_validation` and `ids_testing` should be zero. ## Actual results Length of `ids_validation` = `2556` Length of `ids_testing` = `287` Furthermore, there seems to be duplicate samples in (at least) the *training* split, since: `print(len(set(ids_validation)))` = `950` `print(len(set(ids_testing)))` = `101` All in all, around 7% of the samples of each the *validation* and *test* split seems to be present in the *training* split. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4 - Platform: macOS-12.3.1-arm64-arm-64bit - Python version: 3.9.10 - PyArrow version: 7.0.0
closed
https://github.com/huggingface/datasets/issues/4261
2022-04-30T17:43:37
2022-05-03T06:04:26
2022-05-03T06:04:26
{ "login": "xflashxx", "id": 54585776, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,221,830,292
4,260
Add mr_polarity movie review sentiment classification
Add the MR (Movie Review) dataset. The original dataset contains sentences from Rotten Tomatoes labeled as either "positive" or "negative". Homepage: [https://www.cs.cornell.edu/people/pabo/movie-review-data/](https://www.cs.cornell.edu/people/pabo/movie-review-data/) paperswithcode: [https://paperswithcode.com/dataset/mr](https://paperswithcode.com/dataset/mr) - [ ] I was not able to generate dummy data; the original dataset files have ".pos" and ".neg" as file extensions, so the auto-generator does not work. Is it fine like this, or should dummy data be added?
closed
https://github.com/huggingface/datasets/pull/4260
2022-04-30T13:19:33
2022-04-30T14:16:25
2022-04-30T14:16:25
{ "login": "mo6zes", "id": 10004251, "type": "User" }
[]
true
[]
1,221,768,025
4,259
Fix bug in choices labels in openbookqa dataset
This PR fixes the bug in the openbookqa dataset mentioned in issue #3550. Fix #3550. cc. @lhoestq @mariosasko
closed
https://github.com/huggingface/datasets/pull/4259
2022-04-30T07:41:39
2022-05-04T06:31:31
2022-05-03T15:14:21
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
1,221,637,727
4,258
Fix/start token mask issue and update documentation
This PR fixes a couple of bugs: 1) the perplexity was calculated with a 0 in the attention mask for the start token, causing incorrectly high perplexity scores; 2) the documentation was not updated.
closed
https://github.com/huggingface/datasets/pull/4258
2022-04-29T22:42:44
2022-05-02T16:33:20
2022-05-02T16:26:12
{ "login": "TristanThrush", "id": 20826878, "type": "User" }
[]
true
[]
1,221,393,137
4,257
Create metric card for Mahalanobis Distance
Proposing a metric card to better explain how Mahalanobis distance works (last one for now :sweat_smile:)
closed
https://github.com/huggingface/datasets/pull/4257
2022-04-29T18:37:27
2022-05-02T14:50:18
2022-05-02T14:43:24
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,221,379,625
4,256
Create metric card for MSE
Proposing a metric card for Mean Squared Error
closed
https://github.com/huggingface/datasets/pull/4256
2022-04-29T18:21:22
2022-05-02T14:55:42
2022-05-02T14:48:47
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,221,142,899
4,255
No google drive URL for pubmed_qa
I hosted the data files in https://huggingface.co/datasets/pubmed_qa. This is allowed because the data is under the MIT license. cc @stas00
closed
https://github.com/huggingface/datasets/pull/4255
2022-04-29T15:55:46
2022-04-29T16:24:55
2022-04-29T16:18:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,220,204,395
4,254
Replace data URL in SAMSum dataset and support streaming
This PR replaces data URL in SAMSum dataset: - original host (arxiv.org) does not allow HTTP Range requests - we have hosted the data on the Hub (license: CC BY-NC-ND 4.0) Moreover, it implements support for streaming. Fix #4146. Related to: #4236. CC: @severo
closed
https://github.com/huggingface/datasets/pull/4254
2022-04-29T08:21:43
2022-05-06T08:38:16
2022-04-29T16:26:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,219,286,408
4,253
Create metric cards for mean IOU
Proposing a metric card for mIoU :rocket: sorry for spamming you with review requests, @albertvillanova ! :hugs:
closed
https://github.com/huggingface/datasets/pull/4253
2022-04-28T20:58:27
2022-04-29T17:44:47
2022-04-29T17:38:06
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,219,151,100
4,252
Creating metric card for MAE
Initial proposal for MAE metric card
closed
https://github.com/huggingface/datasets/pull/4252
2022-04-28T19:04:33
2022-04-29T16:59:11
2022-04-29T16:52:30
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,219,116,354
4,251
Metric card for the XTREME-S dataset
Proposing a metric card for the XTREME-S dataset :hugs:
closed
https://github.com/huggingface/datasets/pull/4251
2022-04-28T18:32:19
2022-04-29T16:46:11
2022-04-29T16:38:46
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,219,093,830
4,250
Bump PyArrow Version to 6
Fixes #4152. This PR updates the PyArrow version to 6 in setup.py and in the CI job files .circleci/config.yaml and .github/workflows/benchmarks.yaml. This will fix the ArrayND error which exists in PyArrow 5.
closed
https://github.com/huggingface/datasets/pull/4250
2022-04-28T18:10:50
2022-05-04T09:36:52
2022-05-04T09:29:46
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[]
true
[]
1,218,524,424
4,249
Support streaming XGLUE dataset
Support streaming XGLUE dataset. Fix #4247. CC: @severo
closed
https://github.com/huggingface/datasets/pull/4249
2022-04-28T10:27:23
2022-05-06T08:38:21
2022-04-28T16:08:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,218,460,444
4,248
conll2003 dataset loads original data.
## Describe the bug I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text. Is this a bug or should I use another dataset_name like `lhoestq/conll2003` ? ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset dataset = load_dataset("conll2003") ``` ## Expected results { "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0], "id": "0", "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7], "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."] } ## Actual results ```python print(dataset) DatasetDict({ train: Dataset({ features: ['text'], num_rows: 219554 }) test: Dataset({ features: ['text'], num_rows: 50350 }) validation: Dataset({ features: ['text'], num_rows: 55044 }) }) ``` ```python for i in range(20): print(dataset['train'][i]) {'text': '-DOCSTART- -X- -X- O'} {'text': ''} {'text': 'EU NNP B-NP B-ORG'} {'text': 'rejects VBZ B-VP O'} {'text': 'German JJ B-NP B-MISC'} {'text': 'call NN I-NP O'} {'text': 'to TO B-VP O'} {'text': 'boycott VB I-VP O'} {'text': 'British JJ B-NP B-MISC'} {'text': 'lamb NN I-NP O'} {'text': '. . O O'} {'text': ''} {'text': 'Peter NNP B-NP B-PER'} {'text': 'Blackburn NNP I-NP I-PER'} {'text': ''} {'text': 'BRUSSELS NNP B-NP B-LOC'} {'text': '1996-08-22 CD I-NP O'} {'text': ''} {'text': 'The DT B-NP O'} {'text': 'European NNP I-NP B-ORG'} ```
closed
https://github.com/huggingface/datasets/issues/4248
2022-04-28T09:33:31
2022-07-18T07:15:48
2022-07-18T07:15:48
{ "login": "sue991", "id": 26458611, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,218,320,882
4,247
The data preview of XGLUE
It seems that something is wrong with the data preview of XGLUE
closed
https://github.com/huggingface/datasets/issues/4247
2022-04-28T07:30:50
2022-04-29T08:23:28
2022-04-28T16:08:03
{ "login": "czq1999", "id": 49108847, "type": "User" }
[]
false
[]
1,218,320,293
4,246
Support to load dataset with TSV files by passing only dataset name
This PR implements support to load a dataset (w/o script) containing TSV files by passing only the dataset name (no need to pass `sep='\t'`): ```python ds = load_dataset("dataset/name") ``` The refactoring allows for future builder kwargs customizations based on file extension. Related to #4238.
closed
https://github.com/huggingface/datasets/pull/4246
2022-04-28T07:30:15
2022-05-06T08:38:28
2022-05-06T08:14:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,217,959,400
4,245
Add code examples for DatasetDict
This PR adds code examples for `DatasetDict` in the API reference :)
closed
https://github.com/huggingface/datasets/pull/4245
2022-04-27T22:52:22
2022-04-29T18:19:34
2022-04-29T18:13:03
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,217,732,221
4,244
task id update
Changed multi input text classification to be a task id instead of a category
closed
https://github.com/huggingface/datasets/pull/4244
2022-04-27T18:28:14
2022-05-04T10:43:53
2022-05-04T10:36:37
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,217,689,909
4,243
WIP: Initial shades loading script and readme
null
closed
https://github.com/huggingface/datasets/pull/4243
2022-04-27T17:45:43
2022-10-03T09:36:35
2022-10-03T09:36:35
{ "login": "shayne-longpre", "id": 69018523, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,217,665,960
4,242
Update auth when mirroring datasets on the hub
We don't need to use extraHeaders for rate limits anymore. Anyway, extraHeaders was not working with Git LFS because it was passing the wrong auth to S3.
closed
https://github.com/huggingface/datasets/pull/4242
2022-04-27T17:22:31
2022-04-27T17:37:04
2022-04-27T17:30:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,217,423,686
4,241
NonMatchingChecksumError when attempting to download GLUE
## Describe the bug I am trying to download the GLUE dataset from the NLP module but get an error (see below). ## Steps to reproduce the bug ```python import nlp nlp.__version__ # '0.2.0' nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ``` ## Expected results I expect the dataset to download without an error. ## Actual results ``` INFO:nlp.load:Checking /home/richier/.cache/huggingface/datasets/5fe6ab0df8a32a3371b2e6a969d31d855a19563724fb0d0f163748c270c0ac60.2ea96febf19981fae5f13f0a43d4e2aa58bc619bc23acf06de66675f425a5538.py for additional imports. INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4 INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.py INFO:nlp.load:Found dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/dataset_infos.json to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/dataset_infos.json INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.json INFO:nlp.info:Loading Dataset Infos from /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4 INFO:nlp.builder:Generating dataset glue (/home/richier/.cache/huggingface/datasets/glue/rte/1.0.0) INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source INFO:nlp.utils.file_utils:Couldn't get ETag version for url https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb INFO:nlp.utils.file_utils:https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb not found in cache or force_download set to True, downloading to /home/richier/.cache/huggingface/datasets/downloads/tmpldt3n805 Downloading and preparing dataset glue/rte (download: 680.81 KiB, generated: 1.83 MiB, total: 2.49 MiB) to /home/richier/.cache/huggingface/datasets/glue/rte/1.0.0... 
Downloading: 100%|██████████| 73.0/73.0 [00:00<00:00, 73.9kB/s] INFO:nlp.utils.file_utils:storing https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb in cache at /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64 INFO:nlp.utils.file_utils:creating metadata file for /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64 --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-7-669a8343dcc1> in <module> ----> 1 nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 418 verify_infos = not save_infos and not ignore_verifications 419 self._download_and_prepare( --> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 421 ) 422 # Sync info ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 458 # Checksums verification 459 if verify_infos: --> 460 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums()) 461 for split_generator in split_generators: 462 if str(split_generator.split_info.name).lower() == "all": ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums) 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] 35 if len(bad_urls) > 0: ---> 36 raise NonMatchingChecksumError(str(bad_urls)) 37 logger.info("All the checksums matched successfully.") 38 NonMatchingChecksumError: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-redhat-8.5-Ootpa - Python version: 3.6.13 - PyArrow version: 6.0.1 - Pandas version: 1.1.5
closed
https://github.com/huggingface/datasets/issues/4241
2022-04-27T14:14:21
2022-04-28T07:45:27
2022-04-28T07:45:27
{ "login": "drussellmrichie", "id": 9650729, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
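A hedged workaround sketch for checksum failures like #4241: force a fresh download and skip verification. `ignore_verifications` is the kwarg in `datasets` 1.x/2.x; recent releases spell it `verification_mode="no_checks"` instead.

```python
from datasets import load_dataset

ds = load_dataset(
    "glue",
    "rte",
    download_mode="force_redownload",  # discard any stale cached archive
    ignore_verifications=True,         # skip the recorded-checksum comparison
)
```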
1,217,287,594
4,240
Fix yield for crd3
Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example. Modified the features accordingly: ``` "turns": [ { "names": datasets.features.Sequence(datasets.Value("string")), "utterances": datasets.features.Sequence(datasets.Value("string")), "number": datasets.Value("int32"), } ], } ``` I wasn't able to run the `datasets-cli dummy_data datasets` command. Is there a workaround for this?
closed
https://github.com/huggingface/datasets/pull/4240
2022-04-27T12:31:36
2022-04-29T12:41:41
2022-04-29T12:41:41
{ "login": "shanyas10", "id": 21066979, "type": "User" }
[]
true
[]
1,217,269,689
4,239
Small fixes in ROC AUC docs
The list of use cases did not render on GitHub with the prepended spacing. Additionally, some typos were fixed.
closed
https://github.com/huggingface/datasets/pull/4239
2022-04-27T12:15:50
2022-05-02T13:28:57
2022-05-02T13:22:03
{ "login": "wschella", "id": 9478856, "type": "User" }
[]
true
[]
1,217,168,123
4,238
Dataset caching policy
## Describe the bug I cannot clean the cache of my dataset files, even though I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error ``` [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` The file is now cleaned up, but I still get the error. This happens even if I inspect the local cached contents and clean up the files locally: ```python from datasets import load_dataset_builder dataset_builder = load_dataset_builder("loretoparisi/tatoeba-sentences") print(dataset_builder.cache_dir) print(dataset_builder.info.features) print(dataset_builder.info.splits) ``` ``` Using custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519 None None ``` and remove the files located at `/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-*`. Is there any remote file caching policy in place? If so, is it possible to programmatically disable it? Currently it seems that the file `test.csv` on the repo [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/blob/main/test.csv) is cached remotely. In fact, if I download the file locally from the raw link, it is up-to-date; but if I use it within `datasets` as shown above, it always gives me the first revision of the file, not the latest. Thank you. 
## Steps to reproduce the bug ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset( "loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'], ) # You can make this part faster with num_proc=<some int> sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) sentences = sentences.shuffle() ``` ## Expected results Properly tokenize dataset file `test.csv` without issues. ## Actual results Specify the actual results or traceback. 
``` Downloading data files: 100% 2/2 [00:16<00:00, 7.34s/it] Downloading data: 100% 391M/391M [00:12<00:00, 36.6MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 40.0MB/s] Extracting data files: 100% 2/2 [00:00<00:00, 47.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 100% 2/2 [00:00<00:00, 25.94it/s] 11% 942339/8256449 [01:55<13:11, 9245.85ex/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-3-6a9867fad8d6>](https://localhost:8080/#) in <module>() 12 ) 13 # You can make this part faster with num_proc=<some int> ---> 14 sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) 15 sentences = sentences.shuffle() 10 frames [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` ## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5 - ``` ``` - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - ```
closed
https://github.com/huggingface/datasets/issues/4238
2022-04-27T10:42:11
2022-04-27T16:29:25
2022-04-27T16:28:50
{ "login": "loretoparisi", "id": 163333, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
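For cache questions like #4238, the files are cached locally under ~/.cache/huggingface/datasets, and a stale entry can be bypassed from code. A minimal sketch with the loader call from the report:

```python
from datasets import load_dataset

sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files={"train": "train.csv", "test": "test.csv"},
    delimiter="\t",
    column_names=["label", "text"],
    download_mode="force_redownload",  # re-fetch the files instead of reusing the cache
)
```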
1,217,121,044
4,237
Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
closed
https://github.com/huggingface/datasets/issues/4237
2022-04-27T10:05:20
2022-05-10T12:17:05
2022-05-10T12:17:04
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,217,115,691
4,236
Replace data URL in big_patent dataset and support streaming
This PR replaces the Google Drive URL with our Hub one, once the data owners have approved to host their data on the Hub. Moreover, this PR makes the dataset streamable. Fix #4217.
closed
https://github.com/huggingface/datasets/pull/4236
2022-04-27T10:01:13
2022-06-10T08:10:55
2022-05-02T18:21:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,216,952,640
4,235
How to load VERY LARGE dataset?
### System Info ```shell I am using the Transformers Trainer when encountering this issue. The Trainer requests a torch.utils.data.Dataset as input, which loads the whole dataset into memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except use an IterableDataset, which loads samples of data separately and results in low efficiency. I wonder if there are any tricks like sharding in the Hugging Face Trainer. Looking forward to your reply. ``` ### Who can help? Trainer: @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction None ### Expected behavior ```shell I wonder if there are any tricks like fairseq's "Sharding very large datasets" (https://fairseq.readthedocs.io/en/latest/getting_started.html). Thanks a lot! ```
closed
https://github.com/huggingface/datasets/issues/4235
2022-04-27T07:50:13
2023-07-25T15:07:57
2023-07-25T15:07:57
{ "login": "CaoYiqingT", "id": 45160643, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
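The usual answer to #4235 is streaming: `load_dataset(..., streaming=True)` returns an `IterableDataset` that never materializes the full dataset in RAM. A minimal sketch (the dataset name is just an example):

```python
from datasets import load_dataset

stream = load_dataset("c4", "en", split="train", streaming=True)
stream = stream.shuffle(buffer_size=10_000, seed=42)  # approximate shuffle over a rolling buffer
for example in stream.take(3):  # preview a few examples without downloading everything
    print(example["text"][:80])
```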
1,216,818,846
4,234
Autoeval config
Added autoeval config to imdb as a pilot
closed
https://github.com/huggingface/datasets/pull/4234
2022-04-27T05:32:10
2022-05-06T13:20:31
2022-05-05T18:20:58
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,665,044
4,233
Autoeval
null
closed
https://github.com/huggingface/datasets/pull/4233
2022-04-27T01:32:09
2022-04-27T05:29:30
2022-04-27T01:32:23
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,659,444
4,232
Adding a new tag to tasks.json and modifying existing datasets accordingly
null
closed
https://github.com/huggingface/datasets/pull/4232
2022-04-27T01:21:09
2022-05-03T14:23:56
2022-05-03T14:16:39
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,651,960
4,231
Fix invalid url to CC-Aligned dataset
The CC-Aligned dataset URL has changed to https://data.statmt.org/cc-aligned/; the old address http://www.statmt.org/cc-aligned/ is no longer valid.
closed
https://github.com/huggingface/datasets/pull/4231
2022-04-27T01:07:01
2022-05-16T17:01:13
2022-05-16T16:53:12
{ "login": "juntang-zhuang", "id": 44451229, "type": "User" }
[]
true
[]
1,216,643,661
4,230
Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?
![image](https://user-images.githubusercontent.com/37113676/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png) But on huggingface datasets: ![image](https://user-images.githubusercontent.com/37113676/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png) Where is the German data?
closed
https://github.com/huggingface/datasets/issues/4230
2022-04-27T00:53:52
2023-07-25T15:10:15
2023-07-25T15:10:15
{ "login": "beyondguo", "id": 37113676, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,216,638,968
4,229
new task tag
multi-input-text-classification tag for classification datasets that take more than one input
closed
https://github.com/huggingface/datasets/pull/4229
2022-04-27T00:47:08
2022-04-27T00:48:28
2022-04-27T00:48:17
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,523,043
4,228
new task tag
multi-input-text-classification tag for classification datasets that take more than one input
closed
https://github.com/huggingface/datasets/pull/4228
2022-04-26T22:00:33
2022-04-27T00:48:31
2022-04-27T00:46:31
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,455,316
4,227
Add f1 metric card, update docstring in py file
null
closed
https://github.com/huggingface/datasets/pull/4227
2022-04-26T20:41:03
2022-05-03T12:50:23
2022-05-03T12:43:33
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,216,331,073
4,226
Add pearsonr mc, update functionality to match the original docs
- adds pearsonr metric card - adds ability to return p-value - p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value.
closed
https://github.com/huggingface/datasets/pull/4226
2022-04-26T18:30:46
2022-05-03T17:09:24
2022-05-03T17:02:28
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
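A usage sketch for the option #4226 describes, assuming it landed as a `return_pvalue` flag on `compute`:

```python
from datasets import load_metric

pearsonr = load_metric("pearsonr")
results = pearsonr.compute(
    predictions=[10, 9, 2.5, 6, 4],
    references=[1, 2, 3, 4, 5],
    return_pvalue=True,  # also return the p-value from scipy.stats.pearsonr
)
print(results)  # e.g. {'pearsonr': ..., 'p-value': ...}
```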
1,216,213,464
4,225
autoeval config
add train eval index for autoeval
closed
https://github.com/huggingface/datasets/pull/4225
2022-04-26T16:38:34
2022-04-27T00:48:31
2022-04-26T22:00:26
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,209,667
4,224
autoeval config
add train eval index for autoeval
closed
https://github.com/huggingface/datasets/pull/4224
2022-04-26T16:35:19
2022-04-26T16:36:45
2022-04-26T16:36:45
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,107,082
4,223
Add Accuracy Metric Card
- adds accuracy metric card - updates docstring in accuracy.py - adds .json file with metric card and docstring information
closed
https://github.com/huggingface/datasets/pull/4223
2022-04-26T15:10:46
2022-05-03T14:27:45
2022-05-03T14:20:47
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,216,056,439
4,222
Fix description links in dataset cards
I noticed many links were not properly displayed (only text, no link) on the Hub because of wrong syntax, e.g.: https://huggingface.co/datasets/big_patent This PR fixes all description links in dataset cards.
closed
https://github.com/huggingface/datasets/pull/4222
2022-04-26T14:36:25
2022-05-06T08:38:38
2022-04-26T16:52:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,215,911,182
4,221
Dictionary Feature
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit the values and structures supported by Value and Sequence very well. Is there any suggested workaround? Am I missing something? Thank you in advance.
closed
https://github.com/huggingface/datasets/issues/4221
2022-04-26T12:50:18
2022-04-29T14:52:19
2022-04-28T17:04:58
{ "login": "jordiae", "id": 2944532, "type": "User" }
[ { "name": "question", "color": "d876e3" } ]
false
[]
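For questions like #4221, a list of dictionaries can be declared by wrapping a feature dict in a Python list (a sequence of structs). A minimal sketch with made-up field names; note that `Sequence({...})` would instead transpose the data into a dict of lists:

```python
from datasets import Dataset, Features, Value

# "entities" is a list of {text, start} structs per example
features = Features({"entities": [{"text": Value("string"), "start": Value("int32")}]})

ds = Dataset.from_dict(
    {"entities": [
        [{"text": "Paris", "start": 0}],
        [{"text": "Rome", "start": 7}, {"text": "Milan", "start": 16}],
    ]},
    features=features,
)
print(ds[1]["entities"])  # a list of dicts, as declared
```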
1,215,225,802
4,220
Altered faiss installation comment
null
closed
https://github.com/huggingface/datasets/pull/4220
2022-04-26T01:20:43
2022-05-09T17:29:34
2022-05-09T17:22:09
{ "login": "vishalsrao", "id": 36671559, "type": "User" }
[]
true
[]
1,214,934,025
4,219
Add F1 Metric Card
null
closed
https://github.com/huggingface/datasets/pull/4219
2022-04-25T19:14:56
2022-04-26T20:44:18
2022-04-26T20:37:46
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,214,748,226
4,218
Make code for image downloading from image urls cacheable
Fix #4199
closed
https://github.com/huggingface/datasets/pull/4218
2022-04-25T16:17:59
2022-04-26T17:00:24
2022-04-26T13:38:26
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,214,688,141
4,217
Big_Patent dataset broken
## Dataset viewer issue for '*big_patent*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)* *Unable to view because it says FileNotFound; the dataset also cannot be downloaded through the Python API.* Am I the one who added this dataset? No
closed
https://github.com/huggingface/datasets/issues/4217
2022-04-25T15:31:45
2022-05-26T06:29:43
2022-05-02T18:21:15
{ "login": "Matthew-Larsen", "id": 54189843, "type": "User" }
[ { "name": "hosted-on-google-drive", "color": "8B51EF" } ]
false
[]
1,214,614,029
4,216
Avoid recursion error in map if example is returned as dict value
I noticed this bug while answering [this question](https://discuss.huggingface.co/t/correct-way-to-create-a-dataset-from-a-csv-file/15686/11?u=mariosasko). This code replicates the bug: ```python from datasets import Dataset dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]}) dset.map(lambda ex: {"translation": ex}) ``` and this is the fix for it (before this PR): ```python from datasets import Dataset dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]}) dset.map(lambda ex: {"translation": dict(ex)}) ``` Internally, this can be fixed by merging two dicts via dict unpacking (instead of `dict.update`) in `Dataset.map`, which avoids creating recursive dictionaries. P.S. `{**a, **b}` is slightly more performant than `a.update(b)` in my benchmarks.
closed
https://github.com/huggingface/datasets/pull/4216
2022-04-25T14:40:32
2022-05-04T17:20:06
2022-05-04T17:12:52
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
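A quick way to check the merging claim at the end of #4216; a micro-benchmark sketch (numbers vary by machine and dict size):

```python
import timeit

a = {f"k{i}": i for i in range(8)}
b = {f"v{i}": i for i in range(8)}

def via_update():      # copy-then-update, the pattern being replaced
    d = dict(a)
    d.update(b)
    return d

def via_unpacking():   # dict unpacking, which cannot produce recursive dicts
    return {**a, **b}

print(timeit.timeit(via_update, number=1_000_000))
print(timeit.timeit(via_unpacking, number=1_000_000))
```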
1,214,579,162
4,215
Add `drop_last_batch` to `IterableDataset.map`
Addresses this comment: https://github.com/huggingface/datasets/pull/3801#pullrequestreview-901736921
closed
https://github.com/huggingface/datasets/pull/4215
2022-04-25T14:15:19
2022-05-03T15:56:07
2022-05-03T15:48:54
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
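A usage sketch for the flag #4215 adds, assuming the `drop_last_batch` name from the linked review (`to_iterable_dataset` is just a convenient way to get an `IterableDataset` in recent releases):

```python
from datasets import Dataset

iterable_ds = Dataset.from_dict({"x": list(range(10))}).to_iterable_dataset()
mapped = iterable_ds.map(
    lambda batch: {"x2": [v * 2 for v in batch["x"]]},
    batched=True,
    batch_size=4,
    drop_last_batch=True,  # the trailing 2-example batch is discarded
)
print([ex["x2"] for ex in mapped])  # 8 values: two full batches of 4
```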
1,214,572,430
4,214
Skip checksum computation in Imagefolder by default
Avoids having to set `ignore_verifications=True` in `load_dataset("imagefolder", ...)` to skip checksum verification and speed up loading. The user can still pass `DownloadConfig(record_checksums=True)` to not skip this part.
closed
https://github.com/huggingface/datasets/pull/4214
2022-04-25T14:10:41
2022-05-03T15:28:32
2022-05-03T15:21:29
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
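Per the description of #4214, opting back into checksum recording would look like this; a sketch with a placeholder image directory:

```python
from datasets import DownloadConfig, load_dataset

ds = load_dataset(
    "imagefolder",
    data_dir="./images",  # placeholder path
    download_config=DownloadConfig(record_checksums=True),  # re-enable checksum computation
)
```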
1,214,510,010
4,213
ETT time series dataset
Ready for review.
closed
https://github.com/huggingface/datasets/pull/4213
2022-04-25T13:26:18
2022-05-05T12:19:21
2022-05-05T12:10:35
{ "login": "kashif", "id": 8100, "type": "User" }
[]
true
[]
1,214,498,582
4,212
[Common Voice] Make sure bytes are correctly deleted if `path` exists
`path` should be set to the local path inside the audio feature if it exists, so that bytes can be correctly deleted.
closed
https://github.com/huggingface/datasets/pull/4212
2022-04-25T13:18:26
2022-04-26T22:54:28
2022-04-26T22:48:27
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
1,214,361,837
4,211
DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there, I am trying to push a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same. Dataset and code to reproduce available [here](https://huggingface.co/datasets/pietrolesci/robust_nli). In short: I have 3 feature mappings ```python Tri_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), } ) Ent_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]), } ) Con_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]), } ) ``` Then I create different datasets ```python dataset_splits = {} for split in df["split"].unique(): print(split) df_split = df.loc[df["split"] == split].copy() if split in Tri_dataset: df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) ds = Dataset.from_pandas(df_split, features=Tri_features) elif split in Ent_bin_dataset: df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1}) ds = Dataset.from_pandas(df_split, features=Ent_features) elif split in Con_bin_dataset: df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1}) ds = Dataset.from_pandas(df_split, features=Con_features) else: print("ERROR:", split) dataset_splits[split] = ds datasets = DatasetDict(dataset_splits) ``` I then push to the Hub ```python datasets.push_to_hub("pietrolesci/robust_nli", token="<token>") ``` Finally, I load it from the Hub ```python datasets_loaded_from_hub = load_dataset("pietrolesci/robust_nli") ``` And I get that ```python datasets["LI_TS"].features != datasets_loaded_from_hub["LI_TS"].features ``` since ```python "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]) ``` gets remapped to ```python "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]) ```
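One possible workaround until this is fixed (a hedged sketch, assuming the stored integer labels already follow the per-split mapping applied in the code above) is to re-cast the affected split's features after loading:

```python
from datasets import ClassLabel, Features, Value

# Restore the binary label scheme on the split whose features were remapped:
loaded_split = datasets_loaded_from_hub["LI_TS"].cast(
    Features(
        {
            "idx": Value("int64"),
            "premise": Value("string"),
            "hypothesis": Value("string"),
            "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]),
        }
    )
)
```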
closed
https://github.com/huggingface/datasets/issues/4211
2022-04-25T11:22:54
2023-04-06T19:25:50
2022-05-20T15:15:30
{ "login": "pietrolesci", "id": 61748653, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,214,089,130
4,210
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', 
column_names=['label', 'text'], features = features ``` ERROR: ``` ClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None) Value(dtype='string', id=None) Using custom data configuration loretoparisi--tatoeba-sentences-7b2c5e991f398f39 Downloading and preparing dataset csv/loretoparisi--tatoeba-sentences to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-7b2c5e991f398f39/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... 
Downloading data files: 100% 2/2 [00:18<00:00, 8.06s/it] Downloading data: 100% 391M/391M [00:13<00:00, 35.3MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 36.5MB/s] Failed to read file '/root/.cache/huggingface/datasets/downloads/933132df9905194ea9faeb30cabca8c49318795612f6495fcb941a290191dd5d' with error <class 'ValueError'>: invalid literal for int() with base 10: 'cmn' --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) 15 frames /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() ValueError: invalid literal for int() with base 10: 'cmn' ``` while loading without `features` it loads without errors ``` sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'] ) ``` but the `label` col seems to be wrong (without the `ClassLabel` object): ``` sentences['train'].features {'label': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None)} ``` The dataset was https://huggingface.co/datasets/loretoparisi/tatoeba-sentences Dataset format is: ``` ces Nechci vědět, co je tam uvnitř. ces Kdo o tom chce slyšet? deu Tom sagte, er fühle sich nicht wohl. ber Mel-iyi-d anida-t tura ? hun Gondom lesz rá rögtön. ber Mel-iyi-d anida-tt tura ? deu Ich will dich nicht reden hören. ``` ### Expected behavior ```shell correctly load train and test files. ```
closed
https://github.com/huggingface/datasets/issues/4210
2022-04-25T07:28:42
2022-05-31T12:16:31
2022-05-31T12:16:31
{ "login": "loretoparisi", "id": 163333, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,213,716,426
4,208
Add CMU MoCap Dataset
Resolves #3457 Dataset Request : Add CMU Graphics Lab Motion Capture dataset [#3457](https://github.com/huggingface/datasets/issues/3457) This PR adds the CMU MoCap Dataset. The authors didn't respond even after multiple follow-ups, so I ended up crawling the website to get the category, subcategory and description information. Some of the subjects do not have a category/subcategory/description either. I am using a map from subject to categories, subcategories and description (metadata file). Currently the loading of the dataset works for "asf/amc" and "avi" formats since they have a single download link. But "c3d" and "mpg" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths. Is there a way to extract these multiple archives into one folder? Any other way to go about this? Any suggestions/inputs on this would be helpful. Thank you.
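On the multi-part archive question, one hedged approach (the URLs below are placeholders) is to download and extract every part, then walk the resulting directories as if they were a single folder:

```python
import os

# `download_and_extract` accepts a list and returns one extraction path per part:
part_urls = ["https://example.com/c3d_part1.zip", "https://example.com/c3d_part2.zip"]
extracted_dirs = dl_manager.download_and_extract(part_urls)
for extracted_dir in extracted_dirs:
    for root, _, files in os.walk(extracted_dir):
        for fname in files:
            file_path = os.path.join(root, fname)
            # ...generate one example per file here
```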
closed
https://github.com/huggingface/datasets/pull/4208
2022-04-24T17:31:08
2022-10-03T09:38:24
2022-10-03T09:36:30
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,213,604,615
4,207
[Minor edit] Fix typo in class name
Typo: `datasets.DatsetDict` -> `datasets.DatasetDict`
closed
https://github.com/huggingface/datasets/pull/4207
2022-04-24T09:49:37
2022-05-05T13:17:47
2022-05-05T13:17:47
{ "login": "cakiki", "id": 3664563, "type": "User" }
[]
true
[]
1,212,715,581
4,206
Add Nerval Metric
This PR adds readme.md and ner_val.py to metrics. Nerval is a Python package that helps evaluate NER models. It creates a classification report and a confusion matrix at the entity level.
closed
https://github.com/huggingface/datasets/pull/4206
2022-04-22T19:45:00
2023-07-11T09:34:56
2023-07-11T09:34:55
{ "login": "maridda", "id": 49372461, "type": "User" }
[ { "name": "transfer-to-evaluate", "color": "E3165C" } ]
true
[]
1,212,466,138
4,205
Fix `convert_file_size_to_int` for kilobits and megabits
Minor change to fully align this function with the recent change in Transformers (https://github.com/huggingface/transformers/pull/16891)
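For reference, a sketch of the intended behavior after this change (mirroring the linked Transformers function, not the exact library code; a lowercase trailing `b` denotes bits, so the value is divided by 8):

```python
def convert_file_size_to_int(size):
    # Sketch, not the exact library implementation.
    if isinstance(size, int):
        return size
    if size.upper().endswith("GIB"):
        return int(size[:-3]) * (2**30)
    if size.upper().endswith("MIB"):
        return int(size[:-3]) * (2**20)
    if size.upper().endswith("KIB"):
        return int(size[:-3]) * (2**10)
    if size.upper().endswith("GB"):
        int_size = int(size[:-2]) * (10**9)
        return int_size // 8 if size.endswith("b") else int_size
    if size.upper().endswith("MB"):
        int_size = int(size[:-2]) * (10**6)
        return int_size // 8 if size.endswith("b") else int_size
    if size.upper().endswith("KB"):
        int_size = int(size[:-2]) * (10**3)
        return int_size // 8 if size.endswith("b") else int_size
    raise ValueError(f"`size={size}` is not in a valid format.")

assert convert_file_size_to_int("5MB") == 5_000_000  # megabytes
assert convert_file_size_to_int("5Mb") == 625_000    # megabits -> bytes
```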
closed
https://github.com/huggingface/datasets/pull/4205
2022-04-22T14:56:21
2022-05-03T15:28:42
2022-05-03T15:21:48
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,212,431,764
4,204
Add Recall Metric Card
What this PR mainly does: - add metric card for recall metric - update docs in recall python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!)
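For reference, a quick usage example of the metric this card documents (using the `datasets` metric API of the time):

```python
from datasets import load_metric

recall_metric = load_metric("recall")
results = recall_metric.compute(references=[0, 1, 1], predictions=[0, 1, 0])
print(results)  # {'recall': 0.5}: one of the two positive references was retrieved
```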
closed
https://github.com/huggingface/datasets/pull/4204
2022-04-22T14:24:26
2022-05-03T13:23:23
2022-05-03T13:16:24
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,212,431,067
4,203
Add Precision Metric Card
What this PR mainly does: - add metric card for precision metric - update docs in precision python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!)
closed
https://github.com/huggingface/datasets/pull/4203
2022-04-22T14:23:48
2022-05-03T14:23:40
2022-05-03T14:16:46
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,212,326,288
4,202
Fix some type annotation in doc
null
closed
https://github.com/huggingface/datasets/pull/4202
2022-04-22T12:53:31
2022-04-22T15:03:00
2022-04-22T14:56:43
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
1,212,086,420
4,201
Update GH template for dataset viewer issues
Update template to use new issue forms instead. With this PR we can check if this new feature is useful for us. Once validated, we can update the other templates. CC: @severo
closed
https://github.com/huggingface/datasets/pull/4201
2022-04-22T09:34:44
2022-05-06T08:38:43
2022-04-26T08:45:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,211,980,110
4,200
Add to docs how to load from local script
This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). Although this is an infrequent use case, there might be some users interested in it. Related to #4192 CC: @stevhliu
closed
https://github.com/huggingface/datasets/pull/4200
2022-04-22T08:08:25
2022-05-06T08:39:25
2022-04-23T05:47:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,211,953,308
4,199
Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug It looks like the datasets resulting from a `.map` operation miss the cache when you reload the script, so they are always rebuilt from scratch. In the same interpreter session, they are able to find the cache and reload it. But when you exit the interpreter and rerun the script, the downloading starts from scratch. ## Steps to reproduce the bug Using the example provided in the `red_caps` dataset. ```python from concurrent.futures import ThreadPoolExecutor from functools import partial import io import os import re import urllib import PIL.Image import datasets from datasets import load_dataset from datasets.utils.file_utils import get_datasets_user_agent def fetch_single_image(image_url, timeout=None, retries=0): for _ in range(retries + 1): try: request = urllib.request.Request( image_url, data=None, headers={"user-agent": get_datasets_user_agent()}, ) with urllib.request.urlopen(request, timeout=timeout) as req: image = PIL.Image.open(io.BytesIO(req.read())) break except Exception: image = None return image def fetch_images(batch, num_threads, timeout=None, retries=0): fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries) with ThreadPoolExecutor(max_workers=num_threads) as executor: batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"])) return batch def process_image_urls(batch): processed_batch_image_urls = [] for image_url in batch["image_url"]: processed_example_image_urls = [] image_url_splits = re.findall(r"http\S+", image_url) for image_url_split in image_url_splits: if "imgur" in image_url_split and "," in image_url_split: for image_url_part in image_url_split.split(","): if not image_url_part: continue image_url_part = image_url_part.strip() root, ext = os.path.splitext(image_url_part) if not root.startswith("http"): root = "http://i.imgur.com/" + root root = root.split("#")[0] if not ext: ext = ".jpg" ext = re.split(r"[?%]", ext)[0] image_url_part = root + ext processed_example_image_urls.append(image_url_part) else: processed_example_image_urls.append(image_url_split) processed_batch_image_urls.append(processed_example_image_urls) batch["image_url"] = processed_batch_image_urls return batch dset = load_dataset("red_caps", "jellyfish") dset = dset.map(process_image_urls, batched=True, num_proc=4) features = dset["train"].features.copy() features["image"] = datasets.Sequence(datasets.Image()) num_threads = 5 dset = dset.map(fetch_images, batched=True, batch_size=50, features=features, fn_kwargs={"num_threads": num_threads}) ``` Run this in an interpreter or as a script twice and see that the cache is missed the second time. ## Expected results At reload there should not be any cache miss ## Actual results Every time the script is run, the cache is missed and the dataset is built from scratch. ## Environment info - `datasets` version: 2.1.1.dev0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
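A hedged way to narrow this down is to check whether the map function hashes to the same fingerprint across interpreter sessions (`Hasher` is an internal utility, so treat this as a debugging sketch):

```python
from datasets.fingerprint import Hasher

# If these values change between two runs of the script, the transform's
# fingerprint changes too, and `map` cannot find its cached result.
print(Hasher.hash(fetch_images))
print(Hasher.hash(process_image_urls))
```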
closed
https://github.com/huggingface/datasets/issues/4199
2022-04-22T07:47:08
2022-04-26T17:00:32
2022-04-26T13:38:26
{ "login": "apsdehal", "id": 3616806, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,211,456,559
4,198
There is no dataset
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/4198
2022-04-21T19:19:26
2022-05-03T11:29:05
2022-04-22T06:12:25
{ "login": "wilfoderek", "id": 1625647, "type": "User" }
[]
false
[]
1,211,342,558
4,197
Add remove_columns=True
This should fix all the issues we have with in-place operations in mapping functions. This is crucial when we do weird things like: ``` def apply(batch): batch_size = len(batch["id"]) batch["text"] = ["potato" for _ in range(batch_size)] return {} # Columns are: {"id": int} dset.map(apply, batched=True, remove_columns="text") # crashes because `text` is not in the original columns dset.map(apply, batched=True) # mapped dataset has a `text` column ``` In this PR we suggest having `remove_columns=True` so that we ignore the input completely, and just use the output to generate the mapped dataset. This means that in-place operations won't have any effect anymore.
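A sketch of the proposed semantics (my reading of this proposal, not a merged API): with `remove_columns=True` every input column is dropped, so only the function's return value defines the output schema:

```python
from datasets import Dataset

dset = Dataset.from_dict({"id": [0, 1]})

def apply(batch):
    batch["extra"] = ["potato"] * len(batch["id"])  # in-place edit: ignored
    return {"text": ["potato"] * len(batch["id"])}

mapped = dset.map(apply, batched=True, remove_columns=True)
# mapped.column_names == ["text"]: the original "id" column is gone and the
# in-place mutation has no effect on the output schema.
```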
closed
https://github.com/huggingface/datasets/pull/4197
2022-04-21T17:28:13
2023-09-24T10:02:32
2022-04-22T14:45:30
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
1,211,271,261
4,196
Embed image and audio files in `save_to_disk`
Following https://github.com/huggingface/datasets/pull/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files. Adding `embed_external_files` to `save_to_disk` and setting it to True by default would be kind of a breaking change since some users will get bigger Arrow files when updating the lib, but the advantages are nice: - the resulting dataset is self-contained, in case you want to delete your cache for example or share it with someone else - users also upload these Arrow files to cloud storage via the fs parameter, and in this case they would expect to upload a self-contained dataset - consistency with push_to_hub This can be implemented at the same time as sharding for `save_to_disk` for efficiency, and can reuse the helpers from `push_to_hub` to embed the external files. cc @mariosasko
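A sketch of what usage would look like once this lands (`embed_external_files` is the proposed parameter, not an existing one, and the dataset name is a placeholder):

```python
from datasets import load_dataset

ds = load_dataset("some_audio_dataset", split="train")  # hypothetical dataset
# With the proposed default embed_external_files=True, the written Arrow files
# would contain the audio bytes themselves rather than paths into the cache:
ds.save_to_disk("my_local_copy")
```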
closed
https://github.com/huggingface/datasets/issues/4196
2022-04-21T16:25:18
2022-12-14T18:22:59
2022-12-14T18:22:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
1,210,958,602
4,194
Support lists of multi-dimensional numpy arrays
Fix #4191. CC: @SaulLu
closed
https://github.com/huggingface/datasets/pull/4194
2022-04-21T12:22:26
2022-05-12T15:16:34
2022-05-12T15:08:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,210,734,701
4,193
Document save_to_disk and push_to_hub on images and audio files
Following https://github.com/huggingface/datasets/pull/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data.
closed
https://github.com/huggingface/datasets/pull/4193
2022-04-21T09:04:36
2022-04-22T09:55:55
2022-04-22T09:49:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,210,692,554
4,192
load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last): File "/home/gs603/ahf/pretrained/model.py", line 48, in <module> dataset = load_dataset("json",data_files="dataset/dataset_infos.json") File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset **config_kwargs, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1496, in load_dataset_builder data_files=data_files, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1155, in dataset_module_factory download_mode=download_mode, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 800, in get_module data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.download_config.use_auth_token) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 582, in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 544, in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 194, in resolve_patterns_locally_or_by_urls for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions): File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 144, in _resolve_single_pattern_locally raise FileNotFoundError(error_msg) FileNotFoundError: Unable to find '/home/gs603/ahf/pretrained/dataset/dataset_infos.json' at /home/gs603/ahf/pretrained ![image](https://user-images.githubusercontent.com/33253979/164413285-84ea65ac-9126-408f-9cd2-ce4751a5dd73.png) ![image](https://user-images.githubusercontent.com/33253979/164413338-4735142f-408b-41d9-ab87-8484de2be54f.png) The code is in model.py. Why can't I use the load_dataset function to load my local dataset?
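For what it's worth, the traceback shows the relative path is resolved against the working directory and the file simply isn't found there; a quick sanity check before loading (sketch):

```python
import os
from datasets import load_dataset

data_file = "dataset/dataset_infos.json"
assert os.path.isfile(data_file), f"not found from {os.getcwd()}: {data_file}"
dataset = load_dataset("json", data_files=data_file)
```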
closed
https://github.com/huggingface/datasets/issues/4192
2022-04-21T08:28:58
2022-04-25T16:51:57
2022-04-22T07:39:53
{ "login": "ahf876828330", "id": 33253979, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,210,028,090
4,191
feat: create an `Array3D` column from a list of arrays of dimension 2
**Is your feature request related to a problem? Please describe.** It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of lists of arrays of dimension 1. To illustrate my proposal, let's take the following toy dataset: ```python import numpy as np from datasets import Dataset, features data_map = { 1: np.array([[0.2, 0,4],[0.19, 0,3]]), 2: np.array([[0.1, 0,4],[0.19, 0,3]]), } def create_toy_ds(): my_dict = {"id":[1, 2]} return Dataset.from_dict(my_dict) ds = create_toy_ds() ``` The following 2D processing works without any errors raised: ```python def prepare_dataset_2D(batch): batch["pixel_values"] = [data_map[index] for index in batch["id"]] return batch ds_2D = ds.map( prepare_dataset_2D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array2D(shape=(2, 3), dtype="float32")}) ) ``` The following 3D processing doesn't work: ```python def prepare_dataset_3D(batch): batch["pixel_values"] = [[data_map[index]] for index in batch["id"]] return batch ds_3D = ds.map( prepare_dataset_3D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` The error raised is: ``` --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) [<ipython-input-6-676547e4cd41>](https://localhost:8080/#) in <module>() 3 batched=True, 4 remove_columns=ds.column_names, ----> 5 features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) 6 ) 12 frames [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1971 new_fingerprint=new_fingerprint, 1972 disable_tqdm=disable_tqdm, -> 1973 desc=desc, 1974 ) 1975 else: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 518 self: "Dataset" = kwargs.pop("self") 519 # apply actual function --> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 522 for dataset in datasets: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 485 } 486 # apply actual function --> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 489 # re-apply format to the output [/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 456 # Call actual function 457 --> 458 out = func(self, *args, **kwargs) 459 460 # Update fingerprint of in-place transforms + update in-place history of transforms [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint,
rank, offset, disable_tqdm, desc, cache_only) 2354 writer.write_table(batch) 2355 else: -> 2356 writer.write_batch(batch) 2357 if update_data and writer is not None: 2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_batch(self, batch_examples, writer_batch_size) 505 col_try_type = try_features[col] if try_features is not None and col in try_features else None 506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 507 arrays.append(pa.array(typed_sequence)) 508 inferred_features[col] = typed_sequence.get_inferred_type() 509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in __arrow_array__(self, type) 175 storage = list_of_np_array_to_pyarrow_listarray(data, type=pa_type.value_type) 176 else: --> 177 storage = pa.array(data, pa_type.storage_dtype) 178 return pa.ExtensionArray.from_storage(pa_type, storage) 179 /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Can only convert 1-dimensional array values ``` **Describe the solution you'd like** No error in the second scenario and an identical result to the following snippets. **Describe alternatives you've considered** There are other alternatives that work such as: ```python def prepare_dataset_3D_bis(batch): batch["pixel_values"] = [[data_map[index].tolist()] for index in batch["id"]] return batch ds_3D_bis = ds.map( prepare_dataset_3D_bis, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` or ```python def prepare_dataset_3D_ter(batch): batch["pixel_values"] = [data_map[index][np.newaxis, :, :] for index in batch["id"]] return batch ds_3D_ter = ds.map( prepare_dataset_3D_ter, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` But both solutions require the user to be aware that `data_map[index]` is an `np.array` type. cc @lhoestq as we discuss this offline :smile:
closed
https://github.com/huggingface/datasets/issues/4191
2022-04-20T18:04:32
2022-05-12T15:08:40
2022-05-12T15:08:40
{ "login": "SaulLu", "id": 55560583, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,209,901,677
4,190
Deprecate `shard_size` in `push_to_hub` in favor of `max_shard_size`
This PR adds a `max_shard_size` param to `push_to_hub` and deprecates `shard_size` in favor of this new param, both to give the param a more descriptive name (a shard holds at most `max_shard_size` bytes in `push_to_hub`) and to align the API with [Transformers](https://github.com/huggingface/transformers/blob/ff06b177917384137af2d9585697d2d76c40cdfc/src/transformers/modeling_utils.py#L1350).
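Usage after this change, per the description above (the repo id is a placeholder):

```python
# Shards are capped by size instead of being configured by count:
dataset.push_to_hub("username/my_dataset", max_shard_size="500MB")
```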
closed
https://github.com/huggingface/datasets/pull/4190
2022-04-20T16:08:01
2022-04-22T13:58:25
2022-04-22T13:52:00
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,209,881,351
4,189
Document how to use FAISS index for special operations
Document how to use FAISS index for special operations, by accessing the index itself. Close #4029.
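In short, the pattern being documented (a sketch; as far as I know `get_index(...).faiss_index` exposes the raw faiss object):

```python
# After ds.add_faiss_index(column="embeddings"), grab the underlying index:
faiss_index = ds.get_index("embeddings").faiss_index
# ...then use faiss-specific operations on it directly, e.g. tuning the
# number of probed cells on an IVF index before searching:
faiss_index.nprobe = 10
```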
closed
https://github.com/huggingface/datasets/pull/4189
2022-04-20T15:51:56
2022-05-06T08:43:10
2022-05-06T08:35:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,209,740,957
4,188
Support streaming cnn_dailymail dataset
Support streaming cnn_dailymail dataset. Fix #3969. CC: @severo
closed
https://github.com/huggingface/datasets/pull/4188
2022-04-20T14:04:36
2022-05-11T13:39:06
2022-04-20T15:52:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,209,721,532
4,187
Don't duplicate data when encoding audio or image
Right now if you pass both the `bytes` and a local `path` for audio or image data, the `bytes` are unnecessarily written in the Arrow file, while we could just keep the local `path`. This PR discards the `bytes` when the audio or image file exists locally. In particular it's common for audio dataset builders to provide both the bytes and the local path in order to work in both streaming mode (using the bytes) and non-streaming mode (using a local file - which is often required for audio). cc @patrickvonplaten
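A minimal sketch of the rule this PR applies (assumed shape, not the exact implementation):

```python
import os

def encode_example(value):
    # If the file exists locally, keep only the path; keep the bytes only
    # when there is no usable local file (e.g. in streaming mode).
    if value.get("path") and os.path.isfile(value["path"]):
        return {"bytes": None, "path": value["path"]}
    return {"bytes": value.get("bytes"), "path": value.get("path")}
```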
closed
https://github.com/huggingface/datasets/pull/4187
2022-04-20T13:50:37
2022-04-21T09:17:00
2022-04-21T09:10:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,209,463,599
4,186
Fix outdated docstring about default dataset config
null
closed
https://github.com/huggingface/datasets/pull/4186
2022-04-20T10:04:51
2022-04-22T12:54:44
2022-04-22T12:48:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]